Thermal Analysis. Fundamentals and Applications to Material Characterization. Proceedings of the International Seminar Thermal Analysis and Rheology, Ferrol, Spain, 30 June – 4 July 2003
Ramón Artiaga Díaz (ed.)
A Coruña 2005
Universidade da Coruña Servizo de Publicacións
Thermal Analysis. Fundamentals and Applications to Material Characterization. Edited by Ramón Artiaga Díaz. A Coruña: Universidade da Coruña, Servizo de Publicacións, 2005. xiv + 288 pages. 17 x 24 cm. Cursos_Congresos_Simposios nº 80. Contents: p. ix. Legal deposit: C-268-2006. ISBN: 84-9749100-9.
Published by Universidade da Coruña, Servizo de Publicacións, http://www.udc.es/publicaciones, in collaboration with TA Instruments. © Universidade da Coruña
Distribution. Galicia:
CONSORCIO EDITORIAL GALEGO. Estrada da Estación 70-A, 36818, A Portela, Redondela (Pontevedra). Tel. 986 405 051. Fax: 986 404 935. E-mail:
[email protected]
Spain:
BREOGÁN. C/ Lanuza, 11. 28022, Madrid. Tel. 91 725 90 72. Fax: 91 713 06 31. E-mail:
[email protected]. Web: http://www.breogan.org
Cover design: Julia Núñez Calo. Printed by:
NINO-Centro de Impresión Digital. C/ Rosalía de Castro, 58. 15702 Santiago de Compostela
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, magnetic recording, or any information storage and retrieval system, without prior written permission from the copyright holders.
In memoriam
Prof. Lisardo Núñez Regueira passed away on September 1st, 2005, while this book was in press. He was a close colleague and friend, and he promoted the seminar that led to this book.
To María and Rocío
Contents

Foreword  xi
Acknowledgements  xiii
1. Fundamentals of TGA and SDT. Weibing Xu, Sen Li, Nathan Whitely and Wei-Ping Pan  1
2. An Introduction to the Techniques of Differential Scanning Calorimetry (DSC) and Modulated DSC®. Leonard C. Thomas  9
3. Thermal Analysis in Thermoset Characterization. R. Bruce Prime  27
4. Characterization of Pharmaceutical Materials by Thermal Analysis. Leonard C. Thomas  47
5. The Application of Thermal Analysis in the Study of Metallic Materials. Angel Varela and Ana García  87
6. Thermal Analysis of Inorganic Materials. José L. Mier  99
7. Characterization of Coal by Thermal Analysis Methods. Sen Li, Nathan Whitely, Weibing Xu and Wei-Ping Pan  111
8. Characterisation of Polymer Materials Using FT-IR and DSC Techniques. Pere Pagès  121
9. Characterization of Polymeric Materials by Thermal Analysis, Spectroscopy and Microscopic Techniques. Nathan Whitely, Weibing Xu, Sen Li and Wei-Ping Pan  141
10. Energy Evaluation of Materials by Bomb Calorimetry. José A. Rodríguez and Jorge Proupín  155
11. Introduction to the Viscoelastic Response in Polymers. María L. Cerrada  167
12. Fundamentals of DMA. Ramón Artiaga and Ana García  183
13. Dynamic Mechanical Analysis of Thermosetting Materials. R. Bruce Prime  207
14. Fundamentals and Applications of DEA. Lisardo Núñez, Carlos Gracia-Fernández and Silvia Gómez-Barreiro  225
15. Dielectric Analysis. Experimental. Silvia Gómez-Barreiro, Carlos Gracia-Fernández and Lisardo Núñez Regueira  245
16. Statistical Applications to Thermal Analysis. Ricardo Cao and Salvador Naya  265
FOREWORD

The idea for this book arose in the summer of 2003, during the seminar on Thermal Analysis and Rheology that took place in Ferrol. Some of the lecturers and attendees agreed that it would be helpful to have a book dealing with the techniques and applications of thermal analysis, following an approach similar to the one taken in the seminar. Such a text would be helpful both for beginners and for experienced practitioners who simply want an accurate insight and to put what they learn into practice.

This book provides an overview of thermal analysis techniques. It focuses on the basic principles and looks at their application to polymers, pharmaceuticals, coals, metals and other inorganic materials. The text was conceived as a reference book and practical guide for material researchers, engineers and technologists who use thermal analysis. It also provides an academic approach for university students. The expertise of the contributors spans several fields, including industrial R&D on polymers, instrument development and research on materials characterization. A more academic approach is given by teaching staff from the Thermal Analysis research groups of the Universities of Santiago de Compostela and A Coruña, who were involved in organising the seminar mentioned earlier.

The techniques covered in this book are DSC, MDSC®, TGA, simultaneous DTA-TGA, bomb calorimetry, DEA and DMA. The contents are organised by topic within thermal analysis: apart from the chapters dedicated to the fundamentals of the different techniques, others are devoted to specific applications, namely thermosets, pharmaceuticals, metals and inorganic materials, coal, evaluation of the energy content of materials, and the viscoelastic behaviour of polymers. The chapter authored by P. Pagès from the Universitat Politècnica de Catalunya exemplifies an application to material characterization in which thermal analysis techniques, among others, play an important role. A final chapter emphasizes the importance of the mathematical treatment of thermal analysis data; it introduces smoothing/fitting techniques and pattern recognition.
ACKNOWLEDGEMENTS

Many thanks go to TA Instruments, Universidade da Coruña, Xunta de Galicia and Aginsu S.L., who kindly supported the seminar whose proceedings were the starting point of this book. My special thanks go to Sergio Ruiz from TA Instruments for also supporting the seminar and for encouraging the production of this book. I am grateful to all the contributors, especially Bruce Prime, Wei-Ping Pan and Leonard Thomas, for making the additional effort of coming from the USA to participate in the seminar. Finally, I wish to extend my thanks to Ana Demitroff for her revision of the English in some parts of the book.
Fundamentals of TGA and SDT
Weibing Xu, Sen Li, Nathan Whitely and Wei-Ping Pan
Thermal Analysis Laboratory, Materials Characterization Center, Western Kentucky University, Bowling Green, KY 42101
[email protected]

Thermal analysis is one of the most useful families of analytical methods for collecting both physical and chemical information. Probably the most widely used thermal analysis technique is thermogravimetric analysis (TGA). TGA is used in all types of applications, providing information about the bonding of components within the sample. TGA becomes an even stronger analytical technique when coupled with other thermal analysis techniques, such as differential scanning calorimetry (DSC), or with spectroscopic techniques such as Fourier transform infrared spectroscopy (FTIR) and mass spectrometry (MS).

TGA measures the absolute amount and rate of change in weight of a sample as a function of time or temperature in a controlled environment. A wide range of properties can be measured: thermal stability, oxidative stability, the effects of different atmospheres, moisture and volatile content, and sometimes the composition of multi-component systems. TGA can determine whether and how different components within a material are bonded differently. When TGA is coupled with DSC or differential thermal analysis (DTA), the mode of analysis is called simultaneous DSC-TGA (or DTA-TGA), abbreviated SDT. SDT measures the amount and rate of change in weight, but also measures the heat flow of the sample as a conventional DSC does. SDT therefore measures the same properties as TGA, but extends the list to include heats of reaction, melting points and boiling points.

The three most important signals that TGA collects while analyzing a sample are weight, rate of weight change, and temperature. A differential thermogravimetry (DTG) curve is generated as the first derivative of the weight with respect to temperature or time. The DTG curve can provide both qualitative and quantitative information about the sample. Qualitative modes of analysis include fingerprinting a material and distinguishing between two or more overlapping reactions.
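Since the DTG curve is simply the first derivative of the weight signal, it is easy to compute from exported weight-temperature data. A minimal sketch in Python, using a synthetic single-step weight loss rather than real instrument output:

```python
import numpy as np

def dtg_curve(temperature, weight):
    """Differential thermogravimetry (DTG): first derivative of
    sample weight with respect to temperature (% per degC)."""
    return np.gradient(weight, temperature)

# Synthetic single-step weight loss (sigmoid) for illustration only.
T = np.linspace(300.0, 700.0, 401)                       # degC
w = 100.0 - 40.0 / (1.0 + np.exp(-(T - 500.0) / 15.0))   # % weight

dtg = dtg_curve(T, w)
T_peak = T[np.argmin(dtg)]   # DTG peak = temperature of maximum weight-loss rate
```

The DTG peak (the most negative derivative) marks the temperature of maximum weight-loss rate, here at the midpoint of the synthetic sigmoid.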
Quantitative modes include measurements of peak height and of the temperature at maximum weight loss. The most important aspect of TGA operation is the validity of the measurements made. Confidence in the data can be achieved through regular calibration. For TGA, both mass and temperature calibrations must be performed. Most instrument and software packages provide a relatively automated mass calibration procedure in which the user places a certified calibration weight onto the instrument's sample platform. Temperature calibrations are performed by determining the Curie point of standard metals. The Curie point of a material is the temperature at which the material loses its magnetic susceptibility. To perform the Curie point temperature calibration, a strong magnet must be placed below or on top of the furnace to cause an initial apparent weight gain or loss at room temperature. Figures 1 and 2 show the experimental apparatus for both vertical and horizontal instrument configurations.
Figure 1. Horizontal temperature calibration configuration
Figure 2. Vertical temperature calibration configuration

A small lab jack can be used to adjust the magnet's distance from the sample such that a 2-3% apparent weight gain or loss occurs once the magnet is positioned above or below the sample. Figure 3 shows the Curie point determination for nickel and alumel. Note that the Curie point is denoted as the offset.
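The temperature correction itself is a simple mapping from the observed Curie transitions to the reference values. A sketch of a two-point linear correction; the observed temperatures below are hypothetical, and the reference Curie points are nominal literature values, not calibration certificates:

```python
import numpy as np

# Nominal literature Curie points (degC); illustrative values.
reference = {"alumel": 163.0, "nickel": 354.0}

# Hypothetical observed offset temperatures from the calibration runs.
observed = {"alumel": 160.2, "nickel": 350.1}

metals = sorted(reference)
x = np.array([observed[m] for m in metals])
y = np.array([reference[m] for m in metals])

# Linear correction T_true = a * T_observed + b through the two points.
a, b = np.polyfit(x, y, 1)

def correct(t_observed):
    """Apply the two-point temperature correction."""
    return a * t_observed + b
```

With exactly two calibration points the fit passes through both, so `correct(160.2)` returns 163.0 and `correct(350.1)` returns 354.0.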
Figure 3. Curie point determination for vertical TGA

SDT also has an alternate method of temperature calibration: the melting points of standard materials can be determined from the onset of the endotherms and compared to the theoretical melting temperatures. A good exercise for both TGA and SDT is to perform multiple analyses of calcium oxalate monohydrate. By performing such an analysis, the performance and precision of both the operator and the instrument can be measured. An overlay of five calcium oxalate experiments is shown in Figure 4.
Figure 4. Performance testing using calcium oxalate monohydrate

Although calcium oxalate monohydrate is not strictly a standard material, it is very useful for intra-laboratory analysis. The weight changes and peak temperatures can be entered into a spreadsheet program to check the performance of your instrument and operators. The accuracy of the instrument can be used to assess its long-term performance and to help single out a damaged component. The baseline can also be quite useful in quantifying your instrument's performance and sensitivity: small weight losses become increasingly difficult to measure if the baseline drift is large compared to the weight change being measured. TGA is the foremost analysis technique for determining quantitative properties of the original sample. A polyethylene (PE) sample filled with CaCO3 was analyzed as shown in Figure 5.
Figure 5. TGA curve of polyethylene sample filled with calcium carbonate

Knowing the decomposition reaction of CaCO3, the initial percentage of CaCO3 in the PE can be calculated. At approximately 550 ºC the PE is completely decomposed; thus, the weight loss occurring at approximately 650 ºC is due to the decomposition of CaCO3. This weight loss is a direct result of the evolution of CO2 gas, and the residue is the remaining CaO, which does not decompose further. From either the weight change or the residue, stoichiometric relationships can be used to determine the percentage of CaCO3 in the original PE sample. Calculating the initial percentage of CaCO3 from the weight change is more accurate than calculating it from the residue: most polymers contain fillers, so the residue is a combination of CaO and these fillers, which makes the residue-based calculation less accurate.

TGA and SDT can also be used to demonstrate the importance of the reaction atmosphere. Calcium oxalate monohydrate was analyzed under identical experimental conditions except for the purge gas: the sample was run in air, CO2 and nitrogen at equal flow rates. Figure 6 illustrates Le Chatelier's principle.
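The stoichiometric conversion from the CO2 weight loss to the original CaCO3 content reduces to a ratio of molar masses: every 44.01 g of CO2 evolved corresponds to 100.09 g of original CaCO3. A sketch of the calculation; the 8.8% weight loss is a hypothetical number, not a value read from Figure 5:

```python
# Molar masses (g/mol) for CaCO3 -> CaO + CO2.
M_CACO3 = 100.09
M_CO2 = 44.01
M_CAO = 56.08

def caco3_from_co2_loss(co2_loss_pct):
    """Initial CaCO3 content (% of original sample) estimated from
    the CO2 weight loss of the CaCO3 decomposition step."""
    return co2_loss_pct * M_CACO3 / M_CO2

def caco3_from_residue(residue_pct):
    """Same estimate from the CaO residue; less reliable when other
    fillers also remain in the residue."""
    return residue_pct * M_CACO3 / M_CAO

# Hypothetical filled-PE run: 8.8% weight loss in the CaCO3 step.
print(round(caco3_from_co2_loss(8.8), 1))  # prints 20.0
```

The residue-based estimate is included only to mirror the comparison in the text; as noted above, it is the less accurate of the two when other fillers are present.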
Figure 6. Le Chatelier's principle shown using TGA

Because the degradation of CaC2O4 produces CO2, the reaction is inhibited when it occurs in a CO2-saturated atmosphere. Figure 7 shows the heat flow data collected with the SDT.
Figure 7. DSC curve of calcium oxalate monohydrate in multiple atmospheres

The CaC2O4 oxidizes in air, as shown by the exotherm at approximately 500 ºC, while in nitrogen and CO2 oxidation does not occur but rather pyrolysis. Hi-Resolution TGA is useful for separating overlapping weight losses. Hi-Resolution TGA holds the sample at an isotherm once a weight loss is detected; the isotherm allows the weight loss occurring at the lower temperature to complete before the second weight loss begins. Figure 8 shows that as the resolution setting increases, the two weight losses become better separated and defined.
Figure 8. TGA curves at multiple hi-resolution settings

Quality control testing often exposes a product to a particular atmosphere for very extended periods of time, which can be costly and time consuming. TGA in conjunction with kinetics software can be used to decrease the time and money spent on tedious lifetime testing procedures. A sample is analyzed over the same temperature range using at least four different heating rates. Software is then used to generate plots that predict the product's performance over time. The activation energy, rate constant and other kinetic information can be provided, as seen in Figure 9.
Figure 9. Log[heating rate] curve at multiple conversions

Figure 10 shows the lifetime of the sample at varying isotherms.
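Plots like Figure 9 are the basis of the Flynn-Wall-Ozawa approach, in which the slope of log(heating rate) versus 1/T at a fixed conversion yields the activation energy; the kinetics software mentioned above may use a different or more refined model. A sketch with synthetic data constructed from a known activation energy:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def fwo_activation_energy(heating_rates, temps_K):
    """Flynn-Wall-Ozawa estimate: log10(beta) is roughly linear in 1/T
    at fixed conversion, with slope -0.457*Ea/R, so Ea = -slope*R/0.457."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_K),
                          np.log10(np.asarray(heating_rates)), 1)
    return -slope * R / 0.457   # J/mol

# Synthetic check: generate 1/T values exactly consistent with Ea = 150 kJ/mol.
Ea_true = 150e3
betas = np.array([2.0, 5.0, 10.0, 20.0])          # degC/min
invT = (np.log10(betas[0]) - np.log10(betas)) * R / (0.457 * Ea_true) + 1.0 / 650.0
temps = 1.0 / invT                                 # K

print(round(fwo_activation_energy(betas, temps) / 1e3, 1))  # prints 150.0
```

Because the synthetic points are exactly collinear, the fit recovers the input activation energy; real data at four heating rates scatter around the line.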
Figure 10. Lifetime plot for polymer sample

Although this lifetime plot may not eliminate the need for lengthy quality control testing, it may help identify poorly performing products at an earlier stage in the production process. Two automotive belts composed of alkylated chlorosulfonated polyethylene (ACSM) were tested using TGA to identify why one belt performed at only 10% of the level of a normally functioning belt. The belts were analyzed under identical heating rates in an air atmosphere. Each belt showed the typical degradation profile of a rubber sample, as seen in Figure 11.
Figure 11. TG and DTG curves of passed and failed belt samples in air

The oil decomposed first, followed by the polymer, and finally the carbon black combusted with the oxygen in the air. This analysis showed that neither the oil nor the polymer portion of the rubber caused the bad belt's failure. The decomposition of the bad belt was approximately 20 ºC lower than that of the good belt. Figure 12 shows that the bad belt was composed of carbon black 1, which has the lower decomposition temperature.
Figure 12. TG and DTG curves of carbon black components of failed belt sample

TGA and SDT can be used in nearly any application to gather information. They provide a method of analysis that is fast and easy to operate yet gives precise and accurate results. In situations where TGA and SDT cannot be used to study a system directly, they can provide estimates that alleviate some of the difficulty of using more complicated analysis methods.
Acknowledgements Many thanks to Len Thomas of TA Instruments who allowed use of his short course presentation given at Western Kentucky University.
An Introduction to the Techniques of Differential Scanning Calorimetry (DSC) and Modulated DSC®
Leonard C. Thomas
TA Instruments
[email protected]
1. Introduction

Differential Scanning Calorimetry, or DSC, is one of a series of analytical techniques called thermal analysis. These techniques can be used to characterize the physical properties of a wide variety of materials and how they change with temperature. The most frequently used techniques and the properties measured include:
Differential Scanning Calorimetry (DSC) – Heat Flow Thermogravimetric Analysis (TGA) – Weight Change Thermomechanical Analysis (TMA) – Dimensional Change Dynamic Mechanical Analysis (DMA) – Modulus (Stiffness) Rheology – Viscosity (Flow)
DSC is the most important analytical tool because all transitions (changes in physical properties) involve heat flow. Endothermic transitions such as melting absorb energy (J/g = joules/gram), while exothermic transitions such as crystallization release energy. Other advantages of DSC include the ability to use small samples (1-10 mg), to analyze both solids and liquids, and to use short test times (10-30 minutes). Sample preparation is easy, and most commercial DSCs offer autosamplers and automated analysis. The utility of DSC for characterizing a wide range of materials can be seen in Table 1, which lists the measurements (properties) most frequently made on those materials.

Table 1. Frequent DSC measurements, by material class

Measurements:
Glass Transition Temperature (Tg)
Glass Transition Size (ΔCp)
Melting Temperature (Tm)
Crystallization Temperature (Tc)
Crystallinity (J/g, not %)
Heat Capacity (J/g°C)
Oxidative Stability (Temperature or Time)
Texturing (process) Temperature (°C)
Curing/Degree of Cure (%)
Polymorphic Transitions
Denaturation/Gelatinization

Material classes: TP = Thermoplastics; TS = Thermosets; EL = Elastomers; CH = Chemicals/Drugs; PE = Petroleum; GL = Glasses; ME = Metals; Bio = Proteins/Starches
Heat flow is always the result of a temperature difference between two objects or two points in a single body. With DSC, the difference in heat flow rate between a sample and an inert reference is measured as both are heated or cooled (scanning) in a controlled temperature environment. The temperature difference (ΔT) between the sample and reference changes each time the sample goes through an exothermic or endothermic transition. This ΔT causes a proportional difference in heat flow rate (ΔQ). The magnitude of the heat flow rate is also a function of the thermal resistance (R) and is expressed by:
ΔQ = ΔT / R
In order to obtain the highest level of performance (sensitivity, resolution, single-run measurement of heat capacity, a straight baseline, etc.), modern DSC instruments such as those based on TA Instruments' Tzero™ technology also account for heat flow within the components of the DSC. These components have the ability to store heat (thermal capacitance) and to transfer heat (thermal resistance). Instead of the simplified equation above, Tzero™ technology uses the following equation to more accurately measure the sample heat flow by separating it from heat flow within the components of the DSC:
ΔQ = ΔT/Rr + ΔTo·(1/Rs − 1/Rr) + (Cr − Cs)·dTs/dt − Cs·dΔT/dt

where:

ΔQ = QSample − QReference
ΔT/Rr = principal heat flow term
ΔTo·(1/Rs − 1/Rr) = imbalance in sensor resistance term
(Cr − Cs)·dTs/dt = imbalance in sensor capacitance term
Cs·dΔT/dt = imbalance in heating rate during transition term
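As a numerical illustration, the four-term expression can be evaluated directly; when the two sensor sides are perfectly matched (Rs = Rr, Cs = Cr) and ΔT is steady, only the principal term ΔT/Rr survives. The parameter values below are illustrative, not instrument specifications:

```python
def tzero_heat_flow(dT, dTo, dTs_dt, dDT_dt, Rr, Rs, Cr, Cs):
    """Four-term heat flow expression: principal term, sensor
    resistance imbalance, sensor capacitance imbalance, and
    heating-rate imbalance during a transition."""
    return (dT / Rr
            + dTo * (1.0 / Rs - 1.0 / Rr)
            + (Cr - Cs) * dTs_dt
            - Cs * dDT_dt)

# Matched sensors and steady signal: every correction term vanishes,
# leaving the simple dT/Rr result (0.05 / 0.1 = 0.5).
q = tzero_heat_flow(dT=0.05, dTo=0.1, dTs_dt=0.33, dDT_dt=0.0,
                    Rr=0.1, Rs=0.1, Cr=5e-3, Cs=5e-3)
```

This also shows why the simplified ΔQ = ΔT/R equation is a limiting case of the full expression rather than a different model.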
Figure 1 shows a cross-section of a Tzero™ DSC cell. The cell is the actual measuring chamber, located within an environmentally controlled cabinet complete with electronics and a cooling system. The cabinet is interfaced to a computer controller which uses software to perform experiments and analyze the resulting data. The labeled components include the silver bases for the two cell lids, chambers for temperature conditioning of the purge gas, the measuring chamber, the Tzero™ sensor, the furnace, 54 nickel cooling rods and the cooling flange.

Figure 1. Tzero™ DSC cell schematic
Figure 2 shows the major components of a modern DSC system.
Figure 2. Modern DSC system
A DSC system usually contains a DSC module, which can have numerous options such as an autosampler, an autolid and MDSC®; a refrigerated or LN2 cooling system; and a computer-based controller.
2. Typical DSC Measurements

Figure 3 illustrates the most common type of report obtained from a DSC experiment: a plot of endothermic (heat absorbed) and exothermic (heat released) heat flows as a function of the sample's temperature. No single sample would contain all of the transitions shown in Figure 3; it is simply an illustration of the most common types of transitions (glass transition, crystallization, crosslinking (cure), melting, and oxidation or decomposition).

Figure 3. Transitions in a DSC curve
A brief definition/description of commonly measured transitions is provided below:

Glass Transition

A change in the physical properties of an amorphous material. One of the properties that changes is heat capacity, which can be measured by DSC as an endothermic shift in the heat flow baseline as the sample temperature is increased.

Glass Transition Temperature (Tg)

The temperature, usually a range, over which the properties of an amorphous material change. Below Tg, materials exhibit a glassy, rigid structure; above Tg, they are rubbery and flexible.
Figure 4. Glass transition analysis
Crystallization

The conversion of amorphous structure into crystalline structure. Most crystalline polymers have both types of structure and are usually referred to as semicrystalline. Crystallization is normally seen during cooling from a temperature above the melting point, but it can also occur during heating; in this case, it is often called "cold crystallization."

Melting

The conversion of crystalline structure to a viscous amorphous structure. Melting occurs over a temperature range in polymers due to the molecular weight distribution and the range of crystal sizes and defects within the crystals. With chemicals, the melting range broadens and moves to lower temperatures as the impurity level in the sample increases. Although amorphous materials can flow at higher temperatures, due to decreasing viscosity, they do not melt.
Figure 5. Effect of heating rate on crystallization and melting of quench-cooled PET

Cure

A chemical reaction within a material that increases its crosslink density and reduces molecular mobility. The term is usually used for the reaction that takes place in thermosetting polymers (epoxies, phenolics, etc.). The glass transition temperature of the sample increases as the degree of cure increases. In Figure 6, the first heat shows the initial Tg followed by a residual cure exotherm (20.38 J/g), while the second heat shows a higher Tg; the labeled glass transitions are at 102.64 °C and 155.93 °C.

Figure 6. Thermosets: comparison of first and second heat
Decomposition
The breaking of chemical bonds due to heat or a chemical reaction such as oxidation. Partial decomposition usually results in a decrease in the glass transition and melting temperatures.

Other, less common, measurements are as follows:

Oxidative Stability

A measure of the ability of a material to resist a chemical reaction between its components and oxygen. Tests are normally performed at elevated temperatures to reduce the test time to less than one hour. Pressure DSC is the preferred technique for characterizing samples with volatile components.

Reaction Kinetics

Uses mathematical models to analyze the shape and temperature of reaction exotherms to determine kinetic parameters such as activation energy.

Purity (Figure 7)

Determines the purity of high-purity (> 98%) metals and chemicals. The DSC method of determining purity is based solely on the shape of the melting curve as compared to the shape of the curve for a pure melting material. The higher the impurity level, the lower the melting temperature and the broader the melting range.
Figure 7. Melting temperature decreases and melting range increases as the level of impurity increases
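The melting-point depression underlying this method is usually described by the van't Hoff relation; the full DSC purity analysis fits the shape of the melting curve (partial areas), but the basic relation already links impurity level to the lower melting temperature. A sketch with hypothetical numbers:

```python
R = 8.314  # gas constant, J/(mol K)

def mole_fraction_impurity(T0_K, Tm_K, dHf_J_per_mol):
    """van't Hoff melting-point depression:
    x2 = dHf * (T0 - Tm) / (R * T0**2), where T0 is the melting
    point of the pure material and Tm the observed one."""
    return dHf_J_per_mol * (T0_K - Tm_K) / (R * T0_K ** 2)

# Hypothetical organic compound: T0 = 430.0 K, dHf = 25 kJ/mol,
# observed melting at 429.7 K.
x2 = mole_fraction_impurity(430.0, 429.7, 25e3)
purity_pct = 100.0 * (1.0 - x2)   # roughly 99.5%
```

The 0.3 K depression translating into about half a mole percent of impurity illustrates why the method is restricted to high-purity (> 98%) samples: larger depressions violate the dilute-solution assumption behind the equation.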
Specific Heat Capacity (Figure 8)
The quantity of heat required to raise the temperature of one gram of a material by 1 K. It is measured in DSC by comparing the endothermic heat flow of an unknown with that of a standard material at a specific heating rate. A baseline scan is typically run first, which means that three separate experiments need to be performed. Modern instruments such as the Tzero™ DSC and advanced techniques such as Modulated DSC® can measure specific heat capacity in a single run. Figure 8 shows such a single-run measurement on quenched PET (16 mg, heated at 20 °C/min), with the heat capacity (J/g/°C) and its running integral (J/g) plotted against temperature; the annotated points are 0.7311 J/g at 135.54 °C and 530.8 J/g at 275.00 °C.
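The classical three-run measurement compares the sample's heat flow displacement with that of a standard of known heat capacity (commonly sapphire), each measured against the empty-pan baseline. A sketch of the ratio calculation; all numbers are hypothetical:

```python
def specific_heat(h_sample, h_baseline, h_standard,
                  m_sample, m_standard, cp_standard):
    """Three-run ratio method: the sample Cp equals the standard Cp
    scaled by the ratio of baseline-corrected heat flows and the
    inverse ratio of sample masses."""
    return (cp_standard
            * (h_sample - h_baseline) / (h_standard - h_baseline)
            * (m_standard / m_sample))

# Hypothetical heat flows (mW) at one temperature, sapphire standard.
cp = specific_heat(h_sample=-4.2, h_baseline=-1.0, h_standard=-3.4,
                   m_sample=10.0, m_standard=25.0, cp_standard=0.85)
# cp = 0.85 * (3.2/2.4) * (25/10), roughly 2.83 J/(g K)
```

The three separate runs (baseline, standard, sample) are exactly what the single-run Tzero™ and MDSC® approaches mentioned above are designed to avoid.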
Figure 8. Engineers often need heat capacity information

3. Use of DSC in R&D, Analytical and QC/QA Laboratories

DSC in the R&D Laboratory
The goal of R&D is to create new or improved products that provide financial benefit to the corporation. Although we often think that new products improve profitability through increased market share and sales, an equally good way is to lower the cost of manufacturing a product without adversely affecting its end-use performance. The most successful new products are ones that provide the consumer with a better performing product that also has a lower manufacturing cost. Regardless of which approach you are taking, DSC is a valuable tool that will help you achieve program goals. In developing a new product, there are at least three major elements in the development process that need to be considered.

1. Product Formulation and Costs

In selecting materials to be used in a product, it is necessary to consider their:

a. Cost per unit of product.
b. Ability to meet mechanical and any chemical resistance requirements.
c. Aging characteristics: do properties change with time?
d. Manufacturing costs and sensitivity to normal process variations.
e. Environmental impact: are any of the components volatile, and how do you dispose of the product when it is no longer needed?
DSC can measure changes in structure that result from formulation, processing or aging. Before subjecting new materials or formulations to more extensive testing, which often requires more elaborate sample preparation or long-term oven testing, run a quick check with the DSC to see if the formulation is even a candidate for a product. It is sometimes possible to use cheaper materials in a product through the use of additives such as fillers, plasticizers and antioxidants. Although DSC cannot measure a change in color, it can determine the relative effectiveness of antioxidant concentrations and measure the effect of plasticizer or filler level on transition temperatures.

2. End-Use Performance

Obviously, the product needs to perform the task that the consumer wishes to satisfy. This means that the product must have the physical and/or chemical properties required for its specific end-use. Those properties, and how they are measured or predicted from DSC data, are as follows:

• Mechanical Strength

In order to determine whether a new or modified material has sufficient mechanical stability, it is important to define the temperature range over which the product could be used. Once that is known, DSC can identify the temperatures at which transitions or phase changes occur, which are also the temperatures where physical properties can change by orders of magnitude. For example, if a rubber O-ring is to be used as a gasket between two rigid parts, it needs to remain flexible during end use; if it is rigid, it cannot fill the space between the two rigid parts as they move. By measuring the glass transition temperature of the material, it is possible to determine the temperature at which the properties change from rigid to flexible.

• Mechanical Stability

Materials can be amorphous (non-crystalline), crystalline or a mixture of both. Amorphous materials tend to creep or flow over time as stress is applied to them; however, they tend to have better impact properties than crystalline materials. Although crystalline materials are usually more brittle, they have higher modulus or mechanical strength. DSC can measure the relative crystallinity of materials. In order to optimize both mechanical strength and stability, it is often necessary to use blends of two or more polymers or to precisely control processing to achieve an optimum level of amorphous or crystalline structure.
• Visual Characteristics

Color, color uniformity, surface gloss, etc. are affected by the components used to make the product and their distribution, but they are also affected by thermal or oxidative degradation as well as relative crystallinity. Since sample sizes can be as small as a milligram or as large as tens of milligrams, it is often possible to sample the product to determine whether a particular material is susceptible to thermal or oxidative degradation, tends to undergo phase separation, or has surface properties that differ from bulk properties.
3. Processability and Cost

Although some materials have high strength as well as high thermal and oxidative stability, it may be necessary to heat them to higher temperatures in order to process them, and this costs money. DSC is very useful in helping to determine the suitability of a particular formulation based on the following:

a. Maximum processing temperature required.
b. Process time, temperature, cooling rate and total energy requirements.
c. Potential for thermal or oxidative degradation during processing.
d. Environmental issues due to volatilization of components.
e. Product variability due to normal processing variability.
The goal is to produce the best product in the shortest amount of time and at the lowest cost. DSC provides many of the answers required to meet that challenge. Using DSC to determine the effect of a nucleating agent on crystallization time and temperature is one example of a possible cost reduction.

DSC in the Analytical Laboratory

Many companies use a central laboratory to meet the analytical needs of the company. Although there are pros and cons to having a single central laboratory, it usually results in a better-trained and better-staffed laboratory that is a useful resource to materials scientists throughout the company. Support can be provided to R&D, Process Control, Quality Control/Assurance and even Marketing. A sales force loves to have proof statements about why their product is better, and the analytical laboratory can provide them through competitive product analyses.

DSC in the Quality Control/Assurance Laboratory

In order to provide consistent product at the lowest possible cost, it is necessary to monitor the physical properties of both incoming raw materials and outgoing finished product. This way, if a problem occurs, it can be traced back to the material supplier or to the manufacturing process. The procedures used in the R&D laboratory to develop the optimum product can just as easily be used in the QC/QA laboratory. For high value-in-use products, vendor certification through DSC analyses provides a fast and reliable way to help ensure product quality.
4. Introduction to Modulated DSC® (MDSC®)
Although DSC has been an extremely useful analytical technique for over forty (40) years, it has natural limitations with which most thermal analysts are somewhat familiar. These include:
• Baseline straightness limits sensitivity to detect weak transitions.
• Sensitivity increases with higher heating rates, but resolution decreases as heating rate increases.
• Most modern materials are mixtures of plastics, fillers and additives which have overlapping transitions that are difficult to interpret.
• Most engineering plastics are semi-crystalline and increase in crystallinity while being heated in the DSC. Because this changing crystallinity is difficult to detect in DSC data, DSC crystallinity values can often be wrong by 50% or more.
• The measurement of specific heat capacity by DSC is often slow and laborious.
MDSC® overcomes these natural limitations of DSC, as will be illustrated. However, MDSC® also has a limitation: it is a much slower technique (5-10 times). Therefore, the best approach for characterizing new materials is to always start with DSC and then switch to the MDSC® mode if one or more of its advantages are needed.

5. Operating Principle of MDSC®
With traditional DSC, a linear temperature ramp (heat/cool) is applied as a function of time. The resulting heat flow is a function of the rate of temperature change, absolute temperature, sample mass and the specific heat of the sample.
dH/dt = Cp (dT/dt) + f(T, t)

Where:
dH/dt = Heat Flow Rate (mW or W/g)
Cp = Sample Specific Heat (J/g·°C) × Sample Mass (g)
dT/dt = Heating Rate
f(T, t) = Heat Flow Rate due to Kinetic Processes (mW or W/g)
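As a numeric illustration of the equation above, the heat-capacity term alone can be evaluated for assumed sample values (the 10 mg mass, 2 J/g·°C specific heat and 10 °C/min rate below are illustrative choices, not values from the text):

```python
def sensible_heat_flow_mW(specific_heat_J_gC, mass_g, rate_C_min):
    """Heat-capacity term of dH/dt = Cp (dT/dt) + f(T, t), in mW.
    Cp is specific heat times sample mass, as defined above."""
    return specific_heat_J_gC * mass_g * (rate_C_min / 60.0) * 1000.0

# Assumed values: 10 mg polymer, cp = 2 J/(g*degC), heated at 10 degC/min
flow_mW = sensible_heat_flow_mW(2.0, 0.010, 10.0)   # about 3.3 mW
```

At typical DSC conditions the heat-capacity component is thus a few milliwatts; kinetic events, f(T, t), appear as deviations from this baseline.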
MDSC® uses two simultaneous heating rates, a linear ramp that provides the same information as traditional DSC plus a sinusoidal ramp superimposed on the linear
ramp that provides information about the sample's heat capacity. Figure 9 shows how temperature changes as a function of time in an MDSC® experiment, and Figure 10 provides the time-based derivatives (°C/min) which are the applied heating rates. Although it is beyond the scope of this paper, the applied rates can be selected to provide cooling during the modulation or have heat-only conditions. MDSC® does not require cooling during modulation but does use the change in heating rate to calculate the sample's heat capacity.
Figure 9. MDSC average & modulated temperature. Method: (heat-iso) modulate ±0.42 °C every 40 seconds, ramp 4.00 °C/min to 290.00 °C. Note that temperature is not decreasing during modulation, i.e. no cooling.
Figure 10. MDSC average & modulated heating rate (°C/min). Note that the heating rate is never negative (no cooling).
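The temperature program of Figures 9 and 10 can be reproduced numerically from the method parameters shown there (±0.42 °C amplitude, 40 s period, 4 °C/min underlying ramp); a brief sketch:

```python
import numpy as np

beta = 4.0            # underlying linear ramp, degC/min (Figure 9 method)
amp = 0.42            # modulation amplitude, degC
period = 40.0 / 60.0  # modulation period, min (40 seconds)

t = np.linspace(0.0, 2.0, 2001)                               # time, min
T = 30.0 + beta * t + amp * np.sin(2 * np.pi * t / period)    # modulated temperature
dTdt = beta + (2 * np.pi * amp / period) * np.cos(2 * np.pi * t / period)

# Heat-only condition: the rate never goes negative when 2*pi*amp/period <= beta.
rate_swing = 2 * np.pi * amp / period   # about 3.96 degC/min
```

Since the rate excursion (~3.96 °C/min) does not exceed the 4 °C/min ramp, the heating rate oscillates between roughly 0 and 8 °C/min, matching the "never negative" condition noted in Figure 10.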
The result of the sinusoidal heating rate is a sinusoidal heat flow as shown in Figure 11. The modulated heat flow signal (MHF) is measured during the experiment and used to calculate the signals used by MDSC® for analysis of material properties. With traditional DSC, there is only one heat flow signal (Total) which is the sum of all heat flows. With MDSC®, there are three primary signals: the Total, Reversing and Nonreversing.
dH/dt = Cp (dT/dt) + f(T, t)

Total = Reversing + Nonreversing
Figure 11. MDSC raw data signals

These three signals are shown in Figure 12, which is a quench-cooled sample of PET. The Total signal is calculated from the average value of the MHF signal, while the Reversing signal is calculated from the ratio of the amplitudes of the MHF and modulated heating rate (MHR). The Nonreversing signal is simply the Total minus the Reversing heat flow. All averages and amplitudes are calculated using Fourier transform analysis.
Total = Avg. MHF

Reversing = (Amp MHF / Amp MHR) × Avg. Heating Rate

Nonreversing = Total − Reversing
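These three relations can be exercised on synthetic signals. The sketch below (magnitudes and sign conventions are illustrative assumptions, not instrument data) builds a modulated heat flow from a constant heat-capacity term plus a constant kinetic heat flow, then recovers the Total, Reversing and Nonreversing components exactly as defined above:

```python
import numpy as np

period = 40.0 / 60.0                          # modulation period, min
t = np.linspace(0.0, 10.0, 6000, endpoint=False)   # 15 full cycles
w = 2.0 * np.pi / period

mhr = 4.0 + 3.96 * np.cos(w * t)              # modulated heating rate, degC/min
cp = 1.5                                      # heat-capacity term (assumed)
f_kin = 0.5                                   # constant kinetic heat flow (assumed)
mhf = cp * mhr + f_kin                        # synthetic modulated heat flow

def avg_and_amp(sig):
    """Average and first-harmonic amplitude at the modulation frequency."""
    avg = sig.mean()
    c = 2.0 * np.mean((sig - avg) * np.cos(w * t))
    s = 2.0 * np.mean((sig - avg) * np.sin(w * t))
    return avg, np.hypot(c, s)

avg_mhf, amp_mhf = avg_and_amp(mhf)
avg_mhr, amp_mhr = avg_and_amp(mhr)

total = avg_mhf                           # Total = Avg. MHF
reversing = amp_mhf / amp_mhr * avg_mhr   # (Amp MHF / Amp MHR) x Avg. Heating Rate
nonreversing = total - reversing          # Total - Reversing
```

A real instrument performs this averaging and amplitude extraction continuously along the record rather than once over the whole signal, but the arithmetic is the same.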
Figure 12. Calculated MDSC heat flow signals
6. Application Advantages of MDSC®
As previously stated, MDSC® overcomes the many natural limitations of standard DSC to provide superior sensitivity, resolution and separation of overlapping transitions. These benefits have been well documented in hundreds of papers since the commercialization of MDSC® in 1992. Therefore, only a few of the newer applications will be illustrated here.
Complex Polymer Blends
There is little doubt that blends of semi-crystalline and amorphous polymers are very difficult to characterize by traditional DSC. The reason is the presence of multiple glass transitions and often several crystallization peaks that occur while heating the sample in the DSC. Figure 13 shows DSC data on a common engineering plastic, Xenoy®, a product of the General Electric Company. Xenoy® is a blend of Polybutylene Terephthalate (PBT) and Polycarbonate (PC). Except for the melting peak near 225°C, the results are very difficult to interpret.
Figure 13. DSC of complex polymer blend Figure 14 shows the same material run with MDSC®. Now it is relatively easy to measure the glass transitions of the PBT and PC and interpret the exothermic peaks near 60 and 150°C in the Nonreversing signal. Once the sample is heated above each of the glass transition temperatures, there is a step-increase in molecular mobility. This increase in mobility allows more of the amorphous PBT to crystallize.
Figure 14. MDSC of complex polymer blend
Analysis of Polymer Crystallinity
Although DSC has been used for more than forty years to measure polymer crystallinity, results are often wrong by 50% or more. The reason is the sample's increasing crystallinity as it is being heated in the DSC and the difficulty in identifying the true heat capacity baseline in the data. Figure 15 shows DSC data on a sample of Nylon 6/6. Most DSC users would assume that the baseline is best selected as shown in the blue curve. This yields a crystallinity value of approximately 50 J/g, as compared to 29 J/g obtained from the same data with the green curve. With standard DSC, it is very difficult to judge which is correct.
Figure 15. DSC @ 10 °C/min on Nylon 6/6; where is the baseline?

Figure 16 shows MDSC® results on the same Nylon 6/6 and shows the actual crystallinity to be only about 24 J/g.
Figure 16. MDSC of nylon 6/6
Figure 17 shows MDSC® results on a sample which is a mixture of quench-cooled PET and PC. Since it was quench-cooled in liquid nitrogen, the crystallinity of the sample is known to be approximately 0 J/g. The Total curve, which is typical of the data from traditional DSC, is impossible to analyze correctly because the glass transition of the PC is not even visible. The sum of the melting and crystallization processes seen in the MDSC® Reversing and Nonreversing signals provides the expected crystallinity, and the PC glass transition is easily seen in the Reversing signal.

Figure 17. MDSC of 57/43 % PET/PC mixture (quenched PET and PC, 13.6 mg; peak areas 95.13 J/g and 93.60 J/g; initial crystallinity = 93.6 + (−95.1) = −1.5 J/g, i.e. approximately zero).
7. Summary
The combination of DSC and MDSC® provides an extremely useful analytical tool for the characterization of polymer structure and detection of changes (transitions) in that structure. Whereas DSC is a faster, easier to use technique, MDSC® offers advantages in sensitivity, resolution and separation of overlapping transitions.
Thermal Analysis in Thermoset Characterization

R. Bruce Prime
IBM (Retired) / Consultant
[email protected] Thermosetting polymers are unique. Unlike thermoplastic polymers, chemical reactions are involved in their use. As a result of these reactions the materials cross-link and become “set”, i.e. they can no longer flow or dissolve. Cure most often is thermally activated, hence the term “thermoset”, but cross-linking materials whose cure is light activated are also considered to be thermosets. Some thermosetting adhesives cross-link by a dual cure mechanism, that is by either heat or light activation. Prime [1] is a general reference for this article. In this paper the distinguishing characteristics of thermosetting materials will be described, followed by a detailed discription of thermal analysis of the cure process, some brief comments on properties of cured thermosets and concluding with a discussion of kinetics including a recent case study. Note that Dynamic Mechanical Analysis of Thermosets is treated in a separate paper. Uncured thermosets are mixtures of small reactive molecules, often monomers. They may contain additives such as particles or fibers to enhance physical properties or reduce cost. Adhesives are probably the most common application of thermosets but they are also found in aerospace, electronic, medical and dental, and recreational materials. The most common thermoset is epoxy, and the most common epoxy resin is the diglycidyl ether of bisphenol-A:
[Structure of the diglycidyl ether of bisphenol-A, showing the epoxide group or oxirane ring]
The number of repeat units n is usually 0-6, 0 being the monomer. Epoxy resins can cross-link with themselves, referred to as homopolymerization, an example of which is anionic polymerization promoted by imidazoles. But it is more common to cross-link epoxies with a co-reactant such as a diamine. For cross-linking to occur, at least one of the reactants must be trifunctional or higher. Epoxy resins are typically difunctional, reacting through the oxirane group, although in some cases reaction can occur through the -OH group. Diamines are four-functional, where each amine hydrogen can react. The heat of reaction ΔHrxn for epoxy-amine is ~25 kcal/mol (~106 kJ/mol), and the activation energy for cure can vary from 10 to 25 kcal/mol (40-100 kJ/mol) depending on the particular epoxy and curing agent.

Two distinct phenomena are characteristic of thermoset curing: gelation and vitrification. Gelation is the transformation from a liquid to an elastic gel or rubber, and it will always occur in a thermoset. Gelation is abrupt and irreversible, and the gel point can be defined as the instant at which the molecular weight becomes infinite [2]. A thermoset is no longer processable above the gel point and therefore gelation defines the
upper limit of the work life. For a "Five Minute Epoxy" the five minutes refers to the gel point at room temperature (RT). For example, after the two parts are mixed the user must form an adhesive joint within five minutes, before the material becomes rubbery. However, completion of cure requires a much longer time at RT but may be shortened by increasing the temperature. The degree of conversion at the gel point αgel is constant for a given thermoset, independent of cure temperature, i.e. gelation is iso-conversional. Therefore the time to gel versus temperature can be used to measure the activation energy for cure. Gelation does not affect the rate of cure and therefore is not detected directly by DSC, but only indirectly if αgel is known. Gelation is detected directly by rheology and DMA, and because it is a specific point along the reaction path it is determined by the chemical reaction and is therefore independent of frequency.

Vitrification is a completely distinct phenomenon that may or may not occur during cure, depending on the cure temperature relative to the Tg for full cure. Vitrification is the glass transition due to reaction and occurs when the increasing Tg becomes equal to the cure temperature, i.e. when Tg = Tcure. Vitrification can occur anywhere during the reaction to form either an ungelled glass or a gelled glass. It can be avoided by curing at or above Tg∞, the glass transition temperature for the fully cured network. Unlike gelation, vitrification is reversible by heating. Also unlike gelation, it causes a shift from chemical control to diffusion control and a dramatic slowing of the reaction. Vitrification is detected by TMDSC and DMA as a frequency-dependent transition.

Thermoanalytical techniques include differential scanning calorimetry (DSC), rheology, dynamic mechanical analysis (DMA), thermal mechanical analysis (TMA) and thermogravimetric analysis (TGA). DSC measures heat flow into a material (endothermic) or out of a material (exothermic).
Thermoset cure is exothermic. DSC applications include measurement of Tg, conversion α from the area under the exotherm, the reaction rate dα/dt and the heat capacity Cp. Gelation cannot be detected by DSC but vitrification can be measured by modulated-temperature DSC (MTDSC). Rheology measures the complex viscosity in steady or oscillatory shear. In oscillatory shear the advance of cure can be monitored through the gel point and both gelation and the onset of vitrification can be detected. DMA measures the complex modulus and compliance in several oscillatory modes. Gelation and vitrification can be detected, and the cure reaction can be monitored beyond the gel point in the absence of vitrification. Tg, secondary transitions below Tg, creep and stress relaxation can also be measured. TMA measures linear dimensional changes with time or temperature, sometimes under high loading. Measurements include linear coefficient of thermal expansion (CTE), Tg, creep and relaxation of stresses. TGA measures mass flow, primarily in terms of weight loss. Measurements include filler content for inert fillers; weight loss due to cure, e.g. loss of water for condensation reactions; outgassing; moisture sorption and desorption; and thermal and thermo-oxidative stability.
Figure 1. Schematic, two-dimensional representation of thermoset cure. For simplicity difunctional and trifunctional co-reactants are considered. Cure starts with A-stage monomers (a); proceeds via simultaneous linear growth and branching to a B-stage material below the gel point (b); continues with formation of a gelled but incompletely cross-linked network (c); and ends with the fully cured, C-stage thermoset (d). From Ref. 1.

Cure is illustrated schematically in Fig. 1 for a material with co-reactive monomers such as an epoxy-diamine system. For simplicity the reaction of a difunctional monomer with a trifunctional monomer is considered. Reaction in the early stages of cure {(a) to (b) in Fig. 1} produces larger and branched molecules and reduces the total number of molecules. Macroscopically the thermoset can be characterized by an increase in its viscosity η (see Fig. 2 below). As the reaction proceeds {(b) to (c) in Fig. 1}, the increase in molecular weight accelerates and all the chains become linked together at the gel point into a network of infinite molecular weight. The gel point coincides with the first appearance of an equilibrium (or time-independent) modulus as shown in Fig. 2. Reaction continues beyond the gel point {(c) to (d) in Fig. 1} to complete the network formation. Macroscopically, physical properties such as modulus build to levels characteristic of a fully developed network.

Fig. 3 shows DSC curves at various stages of cure, from uncured to fully cured, after isothermal cure at 160°C for a typical epoxy-amine. Note the residual exotherm decreasing and the Tg increasing in step with cure time. DSC at 10°C/min. From Ref. 4.
Figure 2. Macroscopic development of rheological and mechanical properties during network formation, illustrating the approach to infinite viscosity (from η0, Newtonian liquid) and the first appearance of an equilibrium modulus Ge (network at the gel point, approaching a Hookean solid) as steady-state properties develop from 0 to 100% conversion. From Ref. 3.

Figure 3.
Fig. 4 shows conversion-time curves for the same epoxy for cure temperatures from 100° to 180°C [4]. Note that the curves are parallel during the first part of cure.

Figure 4. Epoxy-amine cure, DGEBA-PACM-20 (1:1): conversion (α) versus time (minutes). Wisanrakkit and Gillham, J.Appl.Poly.Sci. 41, 2885 (1990).
Figure 5. Epoxy cure: Tg versus DSC fractional conversion (5.4°C/% conversion from 90 - 100%). Wisanrakkit and Gillham, J.Appl.Poly.Sci. 42, 2453 (1991). The Venditti-Gillham equation [Venditti and Gillham, J.Appl.Poly.Sci. 64, 3 (1997)]:

ln(Tg) = [(1 − α) ln(Tg0) + (ΔCp∞/ΔCp0) α ln(Tg∞)] / [(1 − α) + (ΔCp∞/ΔCp0) α]

Fig. 5 shows the Tg – conversion relationship for the same epoxy fitted to the DiBenedetto equation [5]. Note that as cure progresses, Tg becomes an increasingly more sensitive measure of cure relative to the residual exotherm. From Ref. 4. Also shown is the Venditti-Gillham equation [6] relating Tg and conversion.
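The Venditti-Gillham relation between Tg and conversion can be evaluated directly; the logarithms require absolute temperature. In the sketch below, Tg∞ = 178°C is taken from the DGEBA/PACM-20 data quoted in this paper, while Tg0 = 0°C and ΔCp∞/ΔCp0 = 0.5 are assumed placeholder values:

```python
import math

def tg_venditti_gillham(alpha, tg0_K, tginf_K, lam):
    """Venditti-Gillham Tg-conversion relation; temperatures in kelvin,
    lam = dCp_inf / dCp_0."""
    num = (1.0 - alpha) * math.log(tg0_K) + lam * alpha * math.log(tginf_K)
    den = (1.0 - alpha) + lam * alpha
    return math.exp(num / den)

tg0_K, tginf_K, lam = 273.15, 178.0 + 273.15, 0.5   # Tg0 and lam are assumed
tgs_C = [tg_venditti_gillham(a, tg0_K, tginf_K, lam) - 273.15
         for a in (0.0, 0.5, 1.0)]   # rises monotonically from Tg0 to Tg_inf
```

The curve runs from Tg0 at α = 0 to Tg∞ at α = 1, steepening at high conversion, which is why Tg becomes the more sensitive measure of cure late in the reaction.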
Figure 6. Epoxy-amine cure, DGEBA-PACM-20 (1:1), Tg∞ = 178°C. Wisanrakkit and Gillham, J.Appl.Poly.Sci. 42, 2453 (1991).

Fig. 6 shows the Tg – time curves for the same epoxy system [4]. Note the similarity to the conversion – time curves. The arrows indicate vitrification.

Dynamic mechanical analysis (DMA) measures the complex modulus and compliance as a function of temperature, time and frequency where, for example,
• storage modulus (E', G') is a measure of the stress stored in the sample as mechanical energy
• loss modulus (E", G") is a measure of the stress dissipated as heat
• tan δ (E"/E' = G"/G') is the phase lag between stress and strain

Properties measured include storage and loss modulus, storage and loss compliance, tan δ, Tg, secondary transitions below Tg, gelation and vitrification, and reaction beyond the gel point. DMA of thermosets is covered in a subsequent paper.

Gelation is the first appearance of a cross-linked network. It is the irreversible transformation of a liquid to a gel or rubber, and it is accompanied by a small increase in the storage modulus. A distinction may be drawn between molecular or chemical gelation (the phenomenon) and macroscopic gelation (its consequence). Chemical gelation as defined by Flory is the approach to infinite molecular weight. It is an iso-conversional point (αgel) that is observable as the first detection of insoluble, cross-linked gel in a reacting mixture (sol). Chemical gelation is also defined as the point where tan δ becomes frequency independent [2]. Macroscopic gelation may be observed as the approach to infinite viscosity, the first evidence of an equilibrium modulus, the G' = G" crossover in a rheology measurement, or as a loss peak in fiber and mesh supported systems.

Vitrification is distinct from gelation. It is glass formation due to Tg increasing from below Tcure to above Tcure as a result of reaction. It only occurs when Tcure < Tg∞ and begins when Tg = Tcure (the definition of vitrification). Vitrification is reversible by heating: liquid or gel ⇔ glass. It causes a dramatic slowing of the rate of cure as a result
of a shift in the reaction from chemical control to diffusion control. Vitrification is mechanically observable as a large increase in modulus and a frequency-dependent loss peak (note that vitrification occurs at shorter times with increasing frequency, i.e. it is not iso-conversional). This phenomenon is illustrated in the companion paper on Dynamic Mechanical Analysis of Thermosetting Materials. It is also observable by MTDSC as a step decrease in heat capacity, as demonstrated in Fig. 7 below during the slow heating of an acid anhydride cured epoxy [7] (1 = heat, 2 = cool). Note that the onset of vitrification at ~100°C results in a diminished rate of cure under diffusion control until cure is complete at ~140°C.
Figure 7.

Fig. 8 shows the non-isothermal cure of the same epoxy-anhydride at three heating rates as well as for the fully cured material. Only at the fastest heating rate does cure proceed to completion without vitrification. From Ref. 7, courtesy M. Reading. 1 = 0.2, 2 = 0.4 and 3 = 0.7°C/min.

Figure 8. Non-reversing heat flow (W/g) and heat capacity (J/g·K) versus temperature (°C).
Thermomechanical analysis (TMA) measures linear dimensional changes in a material with temperature, time or applied load. Tg and the linear coefficient of thermal expansion (CTE) may be measured as well as irreversible expansion or contraction due to relaxation of stresses on heating through the glass transition. Creep or time-dependent strain under load may also be determined. Fig. 9 shows classical TMA in the expansion mode on heating [1]. CTE (α) may be measured from the slope of the TMA curve below and above Tg or it may be read directly from the derivative DTMA curve. CTE in the rubbery state (α2) is typically ~3x that in the glassy state (α1). Note the similarity of the DTMA curve to heat capacity through the Tg interval. Also note that obtaining a “textbook” curve such as this usually requires a preheat to just above Tg to relieve any residual stress.
Bair, Boyle, Ryan, Taylor and Tighe, SPE [Proc. Ann. Tech. Conf.] 33, 362 (1987)
Figure 9.
Figure 10.
Fig. 10 shows the thermal expansion and stress relief of a transfer molded integrated circuit (IC) device [8]. Note that the IC device is small enough to fit into the TMA sample chamber. On the first run, notice the accelerating expansion as the temperature approaches Tg (~160°C). This is due to the relief of molded-in and other residual stresses. From the second run, CTE values may be calculated as well as the irreversible dimensional change due to the relaxation of stresses.

Thermogravimetric analysis (TGA) measures mass flow, ΔW, out of a material (volatility, degradation) as a function of temperature, time and atmosphere. Properties measured include evaporation of volatile components due to outgassing and cure, filler content for inert fillers (carbon/graphite contents can be estimated from nitrogen-followed-by-air pyrolyses), thermal and thermo-oxidative stability, and degradative weight loss.

Fig. 11 shows two-step isothermal TGAs in dry N2 for a UV cured acrylic coating cured at three doses designated "High", "Typical", and "Low" [9]. Note how weight loss at 150°C tracks cure dose, suggesting that uncured acrylic monomer contributes to outgassing.
Best and Prime, Proc. SPIE - Int. Soc. Opt. Eng. 1774, 169 (1992)
Figure 11.

Fig. 12 compares both room temperature and elevated temperature volatility of three acrylic coatings cured with the same "Typical" dose [9]. Coating 1 is from the previous slide. Note that Coating 2 exhibits the greatest RT weight loss but the lowest weight loss at 150°C. The authors attributed the high RT weight loss to greater water sorption capacity for this coating. Similar coatings are used for optical storage compact discs. The same authors showed that uncured acrylate monomers in these coatings will hydrolyze to form acrylic acid, which is corrosive to the recording materials. Similar outgassing is also harmful inside hard disk drives.

Fig. 13 begins the discussion of kinetics, especially cure kinetics, but degradation and aging kinetics may be treated in the same manner [1]. The methodology described here may be characterized as model-free kinetics where the activation energy E is constant. The assumption of a single or overall activation energy applies when the only effect of temperature is to speed up or slow down the reaction. This assumption applies well to most thermoset systems, a notable exception being those that exhibit multiple DSC exotherms. As illustrated below, when E is constant, conversion-time curves (or Tg–time curves through the Tg-conversion relationship) will be parallel on a ln(time)
plot, allowing construction of master cure or aging curves via time-temperature superposition.
Best and Prime, Proc. SPIE - Int. Soc. Opt. Eng. 1774, 169 (1992)

Figure 12.

The shift factor aT is described by the Arrhenius equation, where t is the isoconversional time, i.e. the time to constant conversion or constant Tg:

aT = t2 / t1 = exp[E(T1 − T2) / R T1 T2],  where T1 > T2

Master curves are useful for succinctly summarizing all of the kinetic data and for predicting behavior at times and temperatures that may be of interest. It is recommended that behavior be predicted within the range of temperatures measured, but estimates outside these limits can often be useful. Fig. 14 shows the same Tg – ln(time) curves for epoxy-amine cure shown earlier in Fig. 6 [4]. Note the parallel nature of the curves prior to vitrification, which is demarcated with arrows.

Figure 13. Master curve construction: conversion (Tg) versus ln(reduced time).
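The Arrhenius shift factor is simple to compute. The sketch below uses E = 15.2 kcal/mol, the value quoted for this epoxy in Fig. 15; the 60-minute cure time is an illustrative assumption:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def shift_factor(E_kcal, T1_C, T2_C):
    """Arrhenius shift factor a_T = t2/t1 for T1 > T2 (temperatures in degC)."""
    T1, T2 = T1_C + 273.0, T2_C + 273.0
    return math.exp(E_kcal * (T1 - T2) / (R * T1 * T2))

aT = shift_factor(15.2, 140.0, 120.0)   # about 2.57
t_at_120 = 60.0                          # minutes at 120 degC (assumed)
t_reduced = t_at_120 / aT                # equivalent minutes at the 140 degC reference
```

Since t2 is the (longer) time needed at the lower temperature T2, dividing a 120°C cure time by aT maps it onto the 140°C reference axis of a master curve.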
Figure 14. Epoxy-amine cure, DGEBA-PACM-20 (1:1), Tg∞ = 178°C. Wisanrakkit and Gillham, J.Appl.Poly.Sci. 42, 2453 (1991).

Below in Fig. 15 is the master curve from shifting the above data along the ln(time) axis using the measured activation energy [4]. This curve clearly shows the reaction under chemical control (solid line) as well as the shift to diffusion control following vitrification.

Figure 15. Master curve at 140°C: epoxy-amine cure, DGEBA/PACM-20 (1:1), Tg∞ = 178°C, E = 15.2 kcal/mol, Tref = 140°C. Wisanrakkit and Gillham, J.Appl.Poly.Sci. 42, 2453 (1991).

Fig. 16 shows DSC conversion – time data for a low modulus adhesive with Tg close to room temperature [10]. One application of this adhesive is to produce bonded joints with low residual stress.
Figure 16. Conversion (%) versus time (minutes, logarithmic scale) for cure at -20, 24, 100 and 150°C.

The data above were shifted along the ln(time) axis by varying the activation energy E in an Excel spreadsheet to create the master curve shown in Fig. 17 [10]. In this case E was determined from the best fit of the data. The reference temperature was chosen to be the maximum oven temperature allowed by the process, 120°C. Cure can be seen to be complete in 10 minutes at 120°C.

Figure 17. Master curve: conversion (%) versus time at 120°C (minutes) for the -20, 24, 100 and 150°C data.

Fig. 18 shows the same master curve at a reference temperature of 180°C [10]. The question prompting this curve was "What temperature will be required for cure to be complete in 30 seconds?".
Figure 18. Master curve: conversion (%) versus time at 180°C (minutes) for the -20, 24, 100 and 150°C data.

This paper will conclude with presentation of an actual case study. The full paper [11] will be presented at the NATAS Conference in Albuquerque, NM, September 2003. The subject is a fast curing, two-component polyurethane. Parts are made by mixing the components in-line and rapidly processing and curing. The objective of this study was to determine the kinetic equation for cure for input into process modeling software. Below in Fig. 19 is a typical time-temperature profile for cure of the parts.
Figure 19. Typical time-temperature profile for cure of the parts: temperature (°C) versus time (minutes).

Fig. 20 shows the time-temperature profile together with the desired output, the development of conversion along the profile. Following is the path taken to arrive at this endpoint.
Figure 20. Time-temperature profile (Temp, °C) and development of conversion along the profile, versus time (minutes).
Figure 21. Standard DSC at 10°C/min (cure ramp, immediate, no isothermal) of the uncured two-component polyurethane, sample 5.2200 mg: exotherm onset 36.09°C, ΔH = 198.1 J/g, peak 88.66°C. Universal V3.0G, TA Instruments; run date 17-May-02.

Fig. 21 shows the DSC at 10°C/min of the uncured two-component polyurethane. Note the onset of the cure exotherm near 30°C, which necessitated rapid mixing and sample preparation and chilling of the DSC prior to measurement. While a small secondary exotherm was noted near 200°C, it was decided to ignore this because it was small and possibly due to errors in mixing. We can now state the objective, which was to experimentally evaluate the parameters of the kinetic equation {E, f(α) and A} where

dα/dt = k f(α),  k = A exp[-E/RT]    (1)
and the step-by-step strategy to get there:
1. determine E from multiple heating rate measurements
2. develop the conversion-time master curve from isothermal measurements
3. determine f(α) and k from the shape of the master cure curve
4. determine A from Eq. 1 as the only remaining unknown
The results of Step 1 are shown in the Ozawa plot of peak temperature versus heating rate in Fig. 22. Corrections applied to the raw data yielded an activation energy E of 14.4 kcal/mole. See General Reference for procedural details.
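The Ozawa estimate of Step 1 amounts to a linear fit of ln(heating rate) against 1/Tpeak, with E ≈ −R/1.052 × slope. In the sketch below the data points are reconstructed from the fitted line shown in Fig. 22 (slope −7476.9 K), so the individual peak temperatures are placeholders, not the measured data:

```python
import math

R = 1.987e-3  # kcal/(mol*K)

def ozawa_activation_energy(betas, peak_temps_K):
    """E from the slope of ln(beta) versus 1/Tpeak (Ozawa method)."""
    x = [1.0 / T for T in peak_temps_K]
    y = [math.log(b) for b in betas]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -R / 1.052 * slope

# Assumed peak temperatures lying on the Fig. 22 fit line
temps_K = [355.0, 365.0, 375.0]
betas = [9e9 * math.exp(-7476.9 / T) for T in temps_K]   # heating rates, degC/min
E_est = ozawa_activation_energy(betas, temps_K)          # about 14.1 kcal/mol
```

This reproduces the uncorrected 14.1 kcal/mol of Fig. 22; the 14.4 kcal/mol quoted above includes the corrections applied to the raw data.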
Figure 22. Ozawa plot: heating rate (°/min, logarithmic scale) versus 1/peak temperature (K). Exponential fit: y = 9E+09 e^(-7476.9x), R² = 0.9999; E ~ -R/1.052 × slope = 14.1 kcal/mole.

In Step 2, conversion – time data were obtained by curing samples for various times in the DSC followed by 10°C/min scans to measure the residual exotherm. Conversion was measured from the residual exotherms and the heat of reaction ΔHrxn, measured as the average of scans at 5, 10 and 20°C/min on the uncured thermoset. Scans on partially cured thermoset also gave Tg, from which the Tg – conversion relationship was constructed. Shown in Fig. 23 is the unshifted conversion – time data.
Figure 23. Unshifted conversion (%) versus ln(time) (minutes) for isothermal cure at 30, 45, 60 and 80°C.

These data were shifted to a reference temperature of 80°C by means of Eq. 2, using the activation energy measured in Step 1. Note that 80°C ≡ 353 K.

t80°C = tT exp[E(T − 80) / R(T + 273)(353)]    (2)
The resulting master curve is shown in Fig. 24.

Figure 24. Master curve: conversion (%) versus time @ 80°C (minutes) for cure at 30, 45, 60 and 80°C.

Note that the highest conversion on this master curve is ~90%. To be truly representative, values closer to 100% are needed. The difficulty in achieving this with isothermal cures is the interference of vitrification. Since vitrification does not occur in the actual process, because the profile temperature quickly rises above Tg∞ = 104°C, it must also be avoided in the modeling. To accomplish this, two data points were obtained from DSC cure profiles which simulated the process. The goal was to achieve one conversion just below 90% to overlap with the above master curve results and the other between 95 and 100%. To accomplish this, DSC profiles were designed which would give equivalent isothermal times at 80°C (EIt80°C) of ~35 and ~125 minutes, where EIt is computed by summing along the time-temperature profile as indicated in Eq. 3 below.
EIt80°C = ¦t80°C = ¦ti [E(T-80) / R(T+273)(353)]
(3)
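The running sum of Eq. 3 can be sketched as below; the profile is supplied as hypothetical (minutes, °C) segments, and E/R = 7476.9 K is again assumed from the Step 1 fit:

```python
import math

E_OVER_R = 7476.9  # K; assumed from the Step 1 Arrhenius fit

def equivalent_isothermal_time(segments):
    """Eq. 3: EIt at 80 C. Each increment t_i held at T_i (Celsius) is
    shifted by exp[E(T - 80) / (R (T + 273) 353)] and the results are summed."""
    total = 0.0
    for t_i, T_i in segments:
        total += t_i * math.exp(E_OVER_R * (T_i - 80.0) / ((T_i + 273.0) * 353.0))
    return total
```

One minute at 130°C counts as roughly 14 equivalent minutes at 80°C, which is how the short profiles of Fig. 25 can accumulate EIt values of 36 and 124 minutes.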
[Figure: DSC temperature profiles, temperature (°C) versus time (0–10 minutes). 130° Profile: EIt = 36 minutes at 80°C; 160° Profile: EIt = 124 minutes at 80°C.]
Figure 25. Fig. 25 shows the DSC profiles together with their respective EIt computations. Samples were cured according to these profiles and their conversions were determined from 10°C/min scans. These results were added to the master curve to give the Version 2 master curve, shown below.
[Figure: Version 2 master curve — conversion (%) versus time at 80°C (minutes), 0–150, for the 30, 45, 60 and 80°C isothermal data plus the 130° Profile and 160° Profile points.]
Figure 26. In Step 3, data from the Version 2 master curve are analyzed to determine f(α). The data appear to have a general nth-order shape; the cure is clearly not autocatalytic, lacking the characteristic inflection. The general form of the nth-order equation is

dα/dt = k f(α) = k(1 − α)^n
(4)
where k is the rate constant and n is the order of the reaction. The data were fit to the 1st- and 2nd-order forms of the nth-order equation, shown below:
1st order, n=1: -ln(1-α) = kt
(5)
2nd order, n=2: 1/(1-α) = 1 + kt
(6)
The data exhibited a very poor fit to the 1st-order equation but an excellent fit to the 2nd-order equation, Eq. 6, with a high correlation coefficient as shown in Fig. 27 (linear fit: y = 0.2348x + 1, R² = 0.9818). From this 2nd-order fit, the slope, 0.235, is the rate constant at 80°C.
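The 2nd-order fit can be reproduced with a one-parameter least squares through the fixed intercept of 1. Below is a minimal sketch run on synthetic conversion–time points (the experimental points themselves are not tabulated here):

```python
def fit_second_order_k(times, alphas):
    """Slope k of Eq. 6, 1/(1 - alpha) = 1 + k*t, forced through intercept 1:
    regress y = 1/(1 - alpha) - 1 on t through the origin."""
    num = sum(t * (1.0 / (1.0 - a) - 1.0) for t, a in zip(times, alphas))
    den = sum(t * t for t in times)
    return num / den

# synthetic points generated from a known k = 0.235 1/min for a self-check
times = [10, 25, 50, 75, 100, 125]
alphas = [1.0 - 1.0 / (1.0 + 0.235 * t) for t in times]
k_fit = fit_second_order_k(times, alphas)  # recovers ~0.235
```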
[Figure: 1/(1 − alpha) versus time at 80°C (minutes), 0–150, with a linear fit through the shifted data.]
Figure 27. Step 4 may now be addressed. With E, f(α) and k80°C known, the pre-exponential factor can be computed from the Arrhenius equation, Eq. 1, as 2.88E-10, providing a complete mathematical description of the cure. From this kinetic equation, the development of conversion along the profile shown initially was determined. The EIt80°C for the profile was computed to be 115 minutes, from which a conversion of 96.5% may be estimated. The kinetic equation for cure also allows computation of the master curve, as shown in Fig. 28 below.
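Putting the pieces together, the conversion for any profile follows by inverting the integrated 2nd-order model (Eq. 6) at the profile's equivalent isothermal time. A minimal sketch, using the 80°C rate constant of 0.235/min from the fit:

```python
def conversion_at_80C(eit_minutes, k80=0.235):
    """Invert Eq. 6, 1/(1 - alpha) = 1 + k*t: alpha = 1 - 1/(1 + k*t)."""
    return 1.0 - 1.0 / (1.0 + k80 * eit_minutes)

alpha = conversion_at_80C(115.0)  # EIt of the production profile -> ~0.96
```

This reproduces, to rounding, the ~96.5% conversion quoted above for the 115-minute equivalent isothermal time.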
[Figure: conversion (%) versus time at 80°C (minutes), 0–150 — master curve data (30, 45, 60 and 80°C; 130° and 160° Profiles) overlaid with the kinetic equation prediction.]
Figure 28.
Characterization of Pharmaceutical Materials by Thermal Analysis

Leonard C. Thomas
TA Instruments, 109 Lukens Drive, New Castle, DE 19720, U.S.A.
[email protected]

1. Introduction

Thermal analysis has been an extremely important analytical tool within the pharmaceutical industry for more than forty years. Although the technique could easily be classified as mature, recent advances in Differential Scanning Calorimetry (DSC) have generated renewed enthusiasm for thermal analysis among pharmaceutical scientists. These new developments include Tzero DSC™ and Modulated DSC® (MDSC®), which provide significantly improved performance in critical areas such as sensitivity, resolution and separation of complex transitions. This paper will illustrate the use of these improved DSC technologies in the characterization of a wide variety of pharmaceutical materials, including amorphous and crystalline drugs, drug delivery systems such as tablets and biodegradable polymer microspheres, proteins, and frozen solutions used for freeze-drying.

2. Recent Advances in DSC Technology

A brief description of the new technologies is provided to help explain how the improved performance is obtained over traditional DSC instrumentation.

2.1. Modulated DSC®
An MDSC® experiment is performed on the same instrument as used for traditional DSC measurements. The difference between the two techniques is in the temperature profile applied to the sample and the deconvolution (separation) of the resulting heat flow signal into several components. Instead of the simple linear temperature change used by DSC, MDSC® uses two simultaneous heating rates: an average or underlying rate similar to DSC plus a sinusoidal or modulated heating rate. The average rate provides information equivalent to traditional DSC, while the modulated heating rate provides unique information about the sample’s heat capacity. Figure 1 shows how temperature changes with time in an MDSC® experiment.
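The modulated temperature program can be written down directly. A minimal sketch with illustrative (not vendor-prescribed) parameters — 3°C/min underlying rate, ±1°C amplitude, 60 s period:

```python
import math

def mdsc_temperature(t_min, T0=25.0, beta=3.0, amp=1.0, period_s=60.0):
    """Sample temperature: linear underlying ramp beta (C/min) plus a
    sinusoidal modulation of the given amplitude (C) and period (s)."""
    omega = 2.0 * math.pi * 60.0 / period_s  # rad/min
    return T0 + beta * t_min + amp * math.sin(omega * t_min)

def mdsc_heating_rate(t_min, beta=3.0, amp=1.0, period_s=60.0):
    """dT/dt (C/min): the average rate plus the modulated component."""
    omega = 2.0 * math.pi * 60.0 / period_s
    return beta + amp * omega * math.cos(omega * t_min)
```

With these values the instantaneous heating rate swings between about −3.3 and +9.3°C/min while its average over each cycle stays at 3°C/min — the two simultaneous heating rates the text describes.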
Figure 1. As stated previously, the reason for applying simultaneous heating rates is to create additional information about the heat capacity or structure of the material. A brief examination of the equation used to describe the heat flow signal from a DSC or MDSC® experiment shows the benefit of the dual heating rates.
dH/dt = Cp (dT/dt) + f(T, t)

Where:

dH/dt is the Total heat flow due to the underlying or linear heating rate

Cp is the Heat Capacity Component of the Total heat flow and is calculated from just the heat flow that responds to the modulated heating rate

dT/dt is the measured heating rate, which has both an average (linear) and an amplitude (modulated) component

f(T, t) is the Kinetic Component of the Total heat flow and is calculated from the difference between the Total and the Heat Capacity Component

Cp (dT/dt) is the Reversing Heat Flow Component of the Total Heat Flow
Traditional DSC provides a single signal which is the sum of all thermal events occurring within the temperature range of the experiment. This often makes it difficult to interpret data or detect small transitions. MDSC® has a significant advantage over traditional DSC in that it measures both the Heat Capacity Component and the Total, and obtains the Kinetic Component from the difference. Separation of complex transitions into specific components greatly improves interpretation of results. In general, MDSC® provides the following advantages over traditional DSC.
• increased sensitivity
• increased resolution
• separation of complex transitions
• more accurate measurement of crystallinity in semicrystalline materials
• direct measurement of heat capacity, either while programming temperature or holding it isothermal
Several of these benefits are illustrated on pharmaceutical materials later in this paper.

2.2. Tzero DSC™ Technology
Until the recent introduction of this new approach to measuring absolute values of heat flow, DSC technology had not changed in a significant way since its commercialization in the mid-1960s. That technology was based on a single differential measurement and used either a heat-flux or power-compensation approach. The performance limitations of those early technologies could be seen in baselines that were neither flat nor very reproducible, and in peak widths (resolution) that were much greater than expected from the melting of pure metals. Tzero DSC™ technology provides very significant improvements in baseline performance, sensitivity and temperature resolution of transitions. The improved performance of Tzero DSC™ results from a cell design which produces two simultaneous differential measurements and provides the ability to calibrate the thermal resistance and heat capacitance of the individual sensors as a function of temperature. By knowing the actual thermal characteristics of a specific cell, any imbalance in capacitance or resistance of the sensors can be accounted for in the calculation of the absolute heat flow signal. The improved resolution of Tzero DSC™ technology is seen in Figure 2, a comparison of an indium melt with traditional and Tzero DSC™ technologies.
Figure 2.
Tzero DSC™ provides a much more accurate heat flow signal than was previously available. Because of this, heat capacity values can be measured in a single run versus the three runs required for traditional DSC. This kind of performance results directly from the much more complete calibration of the physical components of the system and use of those calibration values in calculating the heat flow due to just the sample. Calibration factors are measured versus temperature and then continuously applied to the four-term heat flow equation used to calculate the sample heat flow.
q = −ΔT/Rr + ΔT0 (1/Rr − 1/Rs) + (Cr − Cs) dTs/dτ − Cr dΔT/dτ

Where:

q = sample heat flow = qs − qr

ΔT = temperature difference between sample and reference

ΔT0 = temperature difference between sample and the Tzero thermocouple located between sample and reference sensors

R = thermal Resistance of sample or reference sensors

C = heat Capacitance of sample or reference sensors

−ΔT/Rr = principal heat flow

ΔT0 (1/Rr − 1/Rs) = term to account for any imbalance in thermal Resistance between sample and reference sensors

(Cr − Cs) dTs/dτ = term to account for any imbalance in heat Capacitance between the sensors

−Cr dΔT/dτ = term to account for the difference in heating rates between the sensors (T4) and between the sample pans (T4P) during a transition in the sample
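The four-term expression is easy to sanity-check numerically. The sketch below uses arbitrary illustrative values (not real instrument calibration data) and shows that for a perfectly balanced cell — Rs = Rr, Cs = Cr, steady ΔT — the equation collapses to the principal term −ΔT/Rr:

```python
def tzero_heat_flow(dT, dT0, Rr, Rs, Cr, Cs, dTs_dtau, ddT_dtau):
    """Four-term Tzero heat flow:
    q = -dT/Rr + dT0*(1/Rr - 1/Rs) + (Cr - Cs)*dTs_dtau - Cr*ddT_dtau"""
    return (-dT / Rr
            + dT0 * (1.0 / Rr - 1.0 / Rs)
            + (Cr - Cs) * dTs_dtau
            - Cr * ddT_dtau)

# balanced cell: the imbalance and heating-rate terms vanish
q = tzero_heat_flow(dT=0.5, dT0=0.2, Rr=100.0, Rs=100.0,
                    Cr=5.0, Cs=5.0, dTs_dtau=10.0, ddT_dtau=0.0)
# q = -0.5/100 = -0.005 (arbitrary units)
```

Only when the calibration reveals real sensor imbalances do the correction terms contribute, which is the point of the cell-specific calibration described above.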
The advantages of Tzero DSC™ technology are illustrated on pharmaceutical materials in the applications section of this paper. In general, benefits fall into five areas:
• flat and reproducible baselines
• higher sensitivity
• higher resolution
• single-run measurement of heat capacity
• higher heating rate MDSC®
3. Pharmaceutical Applications
Because all transitions in materials involve the flow of heat (into the sample during endothermic events and from the sample for exothermic events), DSC is the universal detector for measuring a wide variety of transitions in pharmaceutical materials. This paper will focus on some of the most common measurements and illustrate the superior performance of Tzero DSC™ and MDSC® technology. These applications include measurement of:
• Amorphous Structure
  − Glass transition
  − Detection of amorphous material in semi-crystalline compounds
• Crystallinity
  − Melting and crystallization
  − Purity
  − Polymorphs
• Drug/Excipient Interaction
• Protein Denaturation
• Freeze Drying
• Miscellaneous / Complementary Thermal Analysis Techniques
4. Amorphous Structure
The physical properties of an amorphous structure are quite different from those of a crystalline structure. Major differences include dissolution rate (faster bioavailability), storage stability and hygroscopicity, the tendency to absorb moisture or other solvents. It is, therefore, important to know if a drug or drug delivery system has an amorphous component. The most common DSC measurement of amorphous structure is the measurement of the glass transition. It is important to know both the size of the transition, in heat flow or heat capacity units, and the temperature (Tg) at which it occurs. The size of the transition provides quantitative information about the amount of amorphous structure in the sample, and the temperature identifies the point where there is a dramatic change in physical properties. Below the glass transition temperature there is limited molecular mobility, while above it there is high mobility that results in much lower viscosity and potentially much greater chemical interaction between components. Because of this, there is a general desire to store samples at least 40°C below their glass transition temperature. Since amorphous materials are often hygroscopic, and because small amounts of moisture or solvent act to plasticize (lower the Tg of) the sample, it is important to measure the actual Tg of drug formulations as well as to control their volatile content. Figure 3 shows the glass transitions of an amorphous sucrose sample that was exposed to lab air (approx. 50% RH) for about thirty minutes. The first heat shows the midpoint of the glass transition near –28.7°C, while the second heat to 100°C shows that it has increased by nearly 40°C to 11.8°C. Even this sample still has several percent moisture, since the glass transition of completely dry sucrose is nearly 70°C.
Figure 3.
Figure 4 is an MDSC® experiment that shows how the size of the glass transition increases with increasing amounts of amorphous structure. The sample of Polyethylene Terephthalate (PET) was first quench-cooled to produce a 100% amorphous structure, then cooled at slower and slower rates to produce increasing amounts of crystalline structure. Even at a cooling rate of 0.2°C/min from above the melting temperature, the material retains a large amorphous component. To quantify the percentage of amorphous phase, the size of the glass transition (0.14 J/g°C) is divided by the size of the glass transition for a 100% amorphous sample (0.35 J/g°C):

% Amorphous Phase = (0.14 / 0.35) × 100 = 40%
Although this is a good approximation of the amorphous content of the sample, the actual content is probably slightly higher. Amorphous material that is sometimes trapped within crystalline lattices, often called the rigid amorphous phase, does not contribute to the step change in heat capacity at the glass transition and is therefore undetected.
Figure 4.
One of the most difficult measurements for DSC is the detection of small amounts (<5%) of amorphous material in highly crystalline samples. The transition is small and is often hidden by small variations (nonlinearity) in the DSC baseline. Figures 5 and 6 show the outstanding baseline obtained with a Tzero™ DSC on a sample of crystalline sucrose that has less than one percent amorphous phase. Figure 5 shows duplicate runs on a very small (180 μg) sample of freeze-dried amorphous sucrose. The value of the second heat is not only to check reproducibility but also to verify that the sample is dry. A wet sample with even a few percent moisture would have a lower Tg the first time that it is heated in a crimped (not hermetically sealed) pan. These runs are essentially calibration runs for determining the weight of amorphous material in another sample based on the size of the glass transition.
Figure 5.
Figure 6 shows an overlay of the data from Figure 5 along with three other experiments. The first experiment was on a relatively large (15 mg) sample of sucrose that was thought to be 100% crystalline. A large sample was used to increase the sensitivity of the measurement in detecting small amounts of amorphous content. At the expected glass transition temperature, a very small step change of 8.4 μW is detected. Comparing this change with that of the 100% amorphous sample permits the calculation of the amount of amorphous structure in the crystalline sample:

8.4 μW / X = 24.6 μW / 180 μg, where X = 61 μg

% Amorphous Sucrose = (61 μg / 15,000 μg) × 100 = 0.4%

In order to verify that there was a small amount of amorphous material in the crystalline sample, the technique of “standard addition” was applied, where a known quantity (80 μg) of amorphous material was added to a known quantity (16,000 μg) of the crystalline sample. Based on the amount of amorphous material added, a step change of 10.9 μW would be expected if there were no amorphous material in the original crystalline sample:

(80 μg / 180 μg) × 24.6 μW = 10.9 μW
Actual results on duplicate runs, shown in the middle of Figure 6, show a step change of 17.3 μW. This equates to a weight of:

17.3 μW / X = 24.6 μW / 180 μg, where X = 126 μg

% Amorphous Sucrose = (126 μg / 16,080 μg) × 100 = 0.8%
Since only 0.5% was added, the original crystalline sample must have contained 0.3% which agrees quite well with the 0.4% measured directly.
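The proportionality and standard-addition arithmetic above can be collected into a short script. All numbers are taken from the text (the 180 μg / 24.6 μW amorphous-sucrose calibration and the 8.4 μW and 17.3 μW step heights):

```python
def amorphous_micrograms(step_uW, cal_uW=24.6, cal_ug=180.0):
    """Micrograms of amorphous sucrose for a given Tg step height (uW),
    by proportion to the 180-ug, 24.6-uW amorphous calibration runs."""
    return step_uW * cal_ug / cal_uW

direct_pct = amorphous_micrograms(8.4) / 15_000 * 100    # ~0.4 % in the neat sample
spiked_pct = amorphous_micrograms(17.3) / 16_080 * 100   # ~0.8 % after spiking
added_pct = 80.0 / 16_080 * 100                          # ~0.5 % deliberately added
original_pct = spiked_pct - added_pct                    # ~0.3 %, consistent with 0.4 %
```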
Figure 6.
For many samples, it is often difficult to detect the glass transition by DSC even when the sample has a high amorphous content. This is due to interferences from other transitions that occur over the same temperature range. The Total signal in Figure 7 (which is equivalent to a standard DSC signal) is almost uninterpretable due to numerous transitions between room temperature and 150°C. Because this was an MDSC® experiment, the transitions are separated into the Reversing and Nonreversing signals and can be more easily interpreted as shown. The
Reversing signal, which is just the heat capacity component of the Total signal, is extremely useful for measuring glass transitions in all types of difficult samples.
Figure 7.
5. Crystallinity
Unlike glass transitions that are often hard to detect, endotherms associated with material melting are relatively high in energy (J/g) and easily seen. However, this does not mean that it is always easy to measure crystallinity by DSC. The DSC user must constantly be aware of other transitions that appear as endothermic peaks and can be misinterpreted as melting. For example, the endothermic peak between 40 and 100°C in Figure 8 is the result of water evaporation from the pinhole pan as the water molecule is lost from a monohydrate form of a drug. For this material, which is highly (>99%) crystalline according to x-ray diffraction results, the crystal structure is also lost as the water evaporates. The resulting amorphous material crystallizes near 122°C and melts at 174°C. Since most endothermic transitions that can be confused with melting are kinetic events (evaporation, decomposition, and enthalpic recovery at Tg), it is relatively easy to distinguish between melting and these other transitions. This is done by changing the heating rate over the range of 1 to 20°C/min. The onset of a true melting peak will shift
very little (<1°C) with heating rate while evaporation and decomposition peaks will shift by 10°C or more.
Figure 8.
Figure 9 shows how the melting peak of Phenacetin changes with heating rates of 1, 5 and 20°C/min. The shift in the peak onset is only 0.3°C. Peak temperature and width do increase with heating rate but the onset of a true melting transition will change only slightly.
Figure 9.
A very different result is obtained on Ciprofloxacin Hydrochloride at the same three heating rates as seen in Figure 10. The onset of the endotherm shifts by nearly 30°C. This means that the endotherm is really decomposition and not melting. Again, the onset of true melting shifts very little with heating rate when using aluminum sample pans, either crimped or hermetic. Acetaminophen is an interesting material in that most pharmaceutical grade samples are usually completely crystalline but easily converted to a completely amorphous structure by cooling at rates of 20°C/min or higher from above the melt. In addition, the crystal structure can exist in different forms called polymorphs, which is discussed later in this section. Figure 11 shows the first heat on the as-received sample and the second heat after the sample had been cooled at 20°C/min from 200°C. The first heat shows no glass transition or cold crystallization peak indicating it was highly crystalline. After cooling the sample at 20°C/min from 200°C, the second heat shows a large glass transition and cold crystallization peak indicative of amorphous structure. The melting peak is still very sharp, indicating no decomposition but it has shifted to a lower temperature typical of a less stable polymorph.
Figure 10.
Figure 11.
Most pharmaceutical drugs will not recrystallize in the solid state once they are completely melted. In addition, a high percentage decompose while melting.

• Calorimetric Purity

DSC can be used to measure the absolute purity of some crystalline compounds, with very high sensitivity for detecting even small amounts (±0.01%) of impurity. This is due to the melting point depression caused by the impurity, which lowers and broadens the temperature range of melting. The effects of 0.7 to 5% mole fraction p-Aminobenzoic Acid on the melting point of Phenacetin are shown in Figure 12. The materials form a eutectic mixture that melts near 113°C, and the melting peak of Phenacetin broadens considerably at all concentrations of impurity.
Figure 12.
The calculation of absolute purity is based on the Van’t Hoff equation:
Ts = To − (R X To² / ΔHf)(1/F)
Where:

Ts = sample temperature
To = calculated melting point of the 100% pure crystalline sample
R = gas constant (1.987 cal/mol K)
X = total mole fraction impurity
F = fraction melted at Ts
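The Van’t Hoff analysis amounts to a straight-line problem in Ts versus 1/F. A minimal sketch of the relation and its inverse, using illustrative values close to the Figure 13 results (To ≈ 408 K, ΔHf ≈ 26.55 kJ/mol ≈ 6346 cal/mol, X ≈ 0.0047 for a 99.53 mol% pure sample):

```python
R_CAL = 1.987  # gas constant, cal/(mol K)

def vant_hoff_Ts(inv_F, To, X, dHf):
    """Van't Hoff melting-point depression: Ts = To - (R*X*To^2/dHf) * (1/F).
    To in K, dHf in cal/mol, X = mole fraction impurity."""
    return To - (R_CAL * X * To**2 / dHf) * inv_F

def mole_fraction_impurity(slope, To, dHf):
    """Invert: a Ts-versus-1/F line with this (negative) slope implies X."""
    return -slope * dHf / (R_CAL * To**2)

To, X, dHf = 408.0, 0.0047, 6346.0
slope = vant_hoff_Ts(1.0, To, X, dHf) - To  # depression per unit 1/F, ~ -0.25 K
```

The resulting slope of about −0.25 K per unit 1/F is consistent with the 0.25°C depression reported in Figure 13.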
An example of applying the Van’t Hoff equation (software program) to a sample of Phenacetin is seen in Figure 13. Based on the Van’t Hoff equation, a plot of Ts versus 1/F should be a straight line. An iterative process of small corrections is made to linearize the plot and provide the intercept To.
[Figure: Heat Flow (W/g) and Temperature (°C) versus Total Area / Partial Area (1/F). Results: Purity 99.53 mol%; Melting Point 134.92°C (determined); Depression 0.25°C; ΔH 26.55 kJ/mol (corrected); Correction 9.381%; Molecular Weight 179.2 g/mol; Cell Constant 0.9770; Onset Slope −10.14 mW/°C; RMS Deviation 0.01°C.]
Figure 13.
The DSC purity technique has several advantages, including:

• Fast: less than 30 minutes
• Uses small samples: typically 1 mg
• Does not require a 100% pure sample of the material to be analyzed
However, there are limitations that must be considered as well, including:

• Purity should be greater than 98%
• Sample cannot decompose during melting
• Impurity cannot form a solid–solid solution; it must be insoluble in the solid and soluble in the melt
• Does not provide the identity of the impurity
• Polymorphs

Some materials can exist in multiple crystal forms called polymorphs. They have the same chemical structure but a different crystalline structure, which can result in significant differences in physical properties such as solubility, bioavailability and storage stability. The most stable form typically has the lowest dissolution rate and may not be the ideal form for a particular application. For all of these reasons, plus others associated with the development and manufacture of efficient and effective drug delivery systems, it is important to know if a specific compound can and does exist in different polymorphic forms. DSC is the most widely used analytical technique for measuring crystallinity and crystalline polymorphs. However, the results are often misinterpreted by the novice user who fails to realize that the sample may be changing as it is heated. The fact that it can change means that kinetic processes are involved. These include crystallization of amorphous material (as seen during the second heat of Acetaminophen in Figure 11) and conversion of less stable polymorphic forms into more stable forms that melt at a higher temperature. Two techniques that can be used to better understand what is happening to the sample as it is heated are Modulated DSC® and multiple heating rate DSC. The benefit of MDSC® can be seen in a comparison of Figures 14 and 15 for a polymer microsphere with approximately 30% drug. The standard DSC data in Figure 14 was run at a relatively low heating rate of 5°C/min to optimize resolution of multiple transitions. It is nearly impossible to interpret the data. Figure 15 is MDSC® data on the same material. The Reversing Heat Flow signal shows a very clear glass transition at about 30°C and two melting peaks between 125 and 175°C. Since melting happens after the cold crystallization exotherm in the Total signal, the sample was primarily amorphous.
Other transitions that complicate the Total signal of DSC include enthalpic recovery at the end of Tg, evaporation of about 2% volatiles (from TGA) and crystallization of the amorphous drug just above 100°C.
Figure 14.
Figure 15.
The benefit of multiple heating rates will be illustrated on several drugs. The first is sulfanilamide, which is reported to have three polymorphic forms. These are easily detected, as seen in Figure 16, which compares data collected at 1 and 10°C/min. Note that heating rate has very little effect on the number and shape of the melting peaks, except for the slight broadening of the large peak at 165°C. This means that each polymorphic form is relatively stable and does not transform from one form to another during the experiment. It is very easy to characterize the relative amount of each form in this kind of sample.
Figure 16.
The second sample to be analyzed with multiple heating rates is a drug monohydrate. Figures 17 and 18 show data at 10 and 1°C/min, respectively. Note that both experiments were performed with hermetic (sealed) pans to prevent evaporation of the water (5% by weight from TGA) from the hydrate. The importance of this will be illustrated a little later. Figure 17 shows the data from the 10°C/min experiment at two sensitivities in order to illustrate some of the finer points. The slight step at the leading edge of the melt is not a glass transition. This was verified by MDSC® data which was heated and cooled over this temperature range. In addition, the glass transition of the amorphous
drug is near 50°C. The baseline shift from the beginning to the end of the melt is caused by the higher heat capacity of the liquid phase as compared to the solid phase. Because of this step, the most accurate way to integrate the peak is with a sigmoidal baseline. The most important information from this scan is that there is only a single melting peak (one polymorph).

[Sample: Drug A Monohydrate; Size: 1.7800 mg; Method: DSC@10; two heats.]
Figure 17.
Figure 18 is the same material run at 1°C/min. The shape of the end of the melt is slightly different, plus there is an additional melting peak near 160°C. At the slower heating rate, a small amount of the material has time to change into another polymorphic form which melts at a higher temperature. Data shown in Figure 19 appear to be from a totally different sample than the data in Figures 17 and 18; however, it is the same material. The huge difference in the results is simply the result of using an unsealed versus a sealed pan. Whereas Figures 17 and 18 were created with sealed hermetic pans, Figure 19 used a hermetic pan with a pinhole in the top. This pinhole allowed water (from the hydrate) to escape from the pan, which also caused the conversion of the crystalline material into an amorphous form. The amorphous form crystallized near 120°C and then melted near 174°C. In a sealed pan with water (5%) present, the sample showed very little tendency to crystallize after it had melted.
[Sample: Drug A Monohydrate; Size: 1.8200 mg; Method: DSC@1; hermetic pan.]

Figure 18.

[Sample: Drug A Monohydrate; Size: 1.1400 mg; hermetic pan with pinhole.]
Figure 19.
Figure 20 is a comparison of Figures 18 and 19, which were both run at 1°C/min, with the only experimental difference being the sealed versus unsealed pans.
Effect of Hermetic vs. Non-Hermetic Pan on the Melting of Drug A Monohydrate
Figure 20.
There is an important point to be learned from this data on the drug monohydrate: the presence of moisture or solvents can have a significant effect on DSC results. Therefore, always run thermogravimetric (TGA) experiments on new samples to determine the temperature and amount of weight loss. When the weight loss exceeds about 0.5%, always compare the effect of sealed versus unsealed pans on the results, and use the type of pan that provides the most meaningful information on the properties of the material. The third and last polymorphic drug to be characterized at multiple heating rates is anhydrous and contains less than 0.05% volatiles (from TGA). It was therefore run in standard crimped aluminum pans, which are not sealed. This example best illustrates the value of using multiple heating rates to characterize the ability of the drug to convert from one polymorphic form to another. In general, start with a heating rate of 10°C/min. If only a single melting peak is detected, then there is probably no need to use other conditions. If multiple peaks or shoulders on the major peak are seen, then a lower heating rate experiment should be performed to see if overlapping peaks can be separated or additional peaks form.
Figure 21 shows the data for the anhydrous drug at 10°C/min. There are clearly two peaks and the data might be integrated with a perpendicular drop from the baseline to try to quantify the amount of each polymorph.
Figure 21.
However, the results would be totally wrong, as seen from the data in Figure 22, which is the same material heated at only 1°C/min. In this case, there is an additional peak at 175°C and a very different ratio of sizes for the peaks near 155 and 161°C. In both experiments the total energy of melting is the same (65 J/g), but it is distributed very differently among the various polymorphic crystal forms. Since this sample changes significantly at lower heating rates, it is necessary to use higher heating rates (50 – 100°C/min) to minimize the time and opportunity for one polymorphic form to convert to another, as will be illustrated.
Figure 22.
Figure 23 shows data at 50°C/min heating rate with the results from the 1 and 10°C/min experiments overlaid for comparison. At the high heating rate only a single melting peak is seen which means that there is only one polymorphic form in the original sample. Since it begins to melt near 153°C, as also seen in the 1 and 10°C/min data, the crystal form in the original sample is the lowest temperature (least stable) polymorph. One concern at higher heating rates is the loss of resolution. A major benefit of the new Tzero DSC™ technology is the higher resolution provided by the T4 and T4P heat flow signals which account for thermal lags that occur due to the sensors and pans. The higher resolution results often make the difference between seeing or not seeing a small amount of one polymorph in a mixture of other polymorphs. Figure 24 is a comparison of the T4P signal of a Tzero DSC™ with the traditional one-term (T1) signal of conventional DSC. The polymorph detected at 170°C was verified to be real by hot-stage microscopy results. The lower resolution signal of conventional DSC was not able to detect it at the high heating rate of 50°C/min while lower heating rates could not be used due to the sample changing during the experiment.
Figure 23.
Figure 24.
6. Drug Delivery System Using Polymer Microspheres
Until now, we have focused on DSC analysis of individual amorphous and/or crystalline drugs. We will now apply what we have learned from those experiments to a much more complex sample consisting of amorphous and hydrated crystalline drug dispersed in biodegradable polymer microspheres of 50 – 200 microns. TGA results (Figure 25) show the sample loses just over 2% weight by 150°C. Therefore, it is necessary to use hermetic pans for the DSC experiments. The next step in determining the crystalline content of the drug in the microspheres is to run at different heating rates to see if the sample undergoes polymorphic transformation at low heating rates.
Figure 25. (Sample: Drug A Microspheres, 16.8110 mg; Method: TGA@10)
Figure 26 is a comparison of the Total Heat Capacity signals from experiments run at 1, 10 and 50°C/min. Results show that only at 50°C/min is a single melting peak obtained. This means that the sample must be run at 50°C/min in order to measure the original crystalline form of the drug instead of the other polymorphs that form at slower heating rates.
Figure 26. Effect of heating rate on polymorphic conversion in Drug A microspheres.
At 50°C/min in Figure 27, the first heat shows a total heat of fusion of 12.58 J/g with 97% of that coming from the peak near 115°C. Since the pure drug has a heat of fusion of about 98 J/g, this means that the microspheres contain about 13% (12.58/98) crystalline drug. To confirm reproducibility of the measurement at 50°C/min heating rate, the sample was run in triplicate with excellent results shown in Figure 28. In addition to crystalline drug, the microspheres contain an even higher concentration of amorphous drug. However, all of the data shown in Figures 27 and 28 show only a single glass transition near 35°C. This means that the amorphous drug and the amorphous polymer of the microspheres are completely miscible and it is not possible to measure the amount of amorphous drug in the sample by DSC; another approach such as TGA is needed.
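The crystallinity estimate above is a simple ratio of the measured heat of fusion to that of the pure drug. A minimal sketch with the values quoted in the text (the function name is illustrative):

```python
def percent_crystalline(delta_h_sample, delta_h_pure):
    """Estimate crystalline content (%) from the ratio of heats of fusion (J/g)."""
    return 100.0 * delta_h_sample / delta_h_pure

# Values from the microsphere example: 12.58 J/g measured vs ~98 J/g for the pure drug
crystallinity = percent_crystalline(12.58, 98.0)  # about 13%
```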
Figure 27.
Figure 28. Comparison of data from three experiments on Drug A monohydrate microspheres.
TGA data of placebo microspheres is shown in Figure 29. It shows that the polymer microspheres (no drug) are essentially fully decomposed by 400°C where the rate of weight loss is 0.04%/min. This is in contrast to the microspheres with the drug (Figure 25) which are still losing weight at a relatively high rate (0.78%/min) due to the ongoing decomposition of the drug. Since all of the drug is amorphous by 400°C, it is possible to calculate the total drug loading from a ratio of the rates of weight loss if the rate of weight loss for a 100% drug sample is known. Although not shown, a pure drug sample showed a rate of weight loss of 2.24%/min at 400°C. Therefore:
% Total Drug = (0.78 − 0.04) / 2.24 ≈ 30%
The target loading was 32%, so there is good agreement. Since the sample was known to have 13% crystalline drug from the DSC data, it must also have had about 17% amorphous drug. Since the purpose of the microspheres is to provide a controlled rate of drug release into the body, and since amorphous and crystalline drugs have different dissolution rates, it is not surprising that the microspheres were formulated with both amorphous and crystalline drug.
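The drug-loading arithmetic can be laid out as a small helper (the function name is illustrative; note the raw ratio evaluates slightly above the rounded ~30% quoted in the text, still in good agreement with the 32% target):

```python
def total_drug_percent(rate_formulation, rate_placebo, rate_pure_drug):
    """Total drug loading (%) from TGA weight-loss rates (%/min) at 400 C."""
    return 100.0 * (rate_formulation - rate_placebo) / rate_pure_drug

total = total_drug_percent(0.78, 0.04, 2.24)  # ~33%, close to the 32% target
amorphous = total - 13.0                      # subtract the DSC crystalline fraction
```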
Figure 29.
7. Drug-Excipient Interaction
Actual drug dosage forms are seldom just the pure drug or protein. Instead, they usually contain multiple ingredients that aid in the manufacture, storage or delivery of the active ingredient. Because the dosage form must be stored over a period of time at some temperature and relative humidity, there is a need to confirm that the efficacy of a drug formulation will not change with time. Two approaches used by pharmaceutical companies to improve storage stability are keeping the sample below the glass transition temperature of every ingredient (which minimizes molecular mobility and, therefore, possible chemical interaction) and using crystalline drugs, which are more thermally stable than amorphous drugs.

It is usually the responsibility of the drug formulations group to determine whether the drug will interact with any of the other ingredients (excipients) in the final formulation. This is often a tedious task in which multiple samples must be stored under different conditions for long periods of time and tested regularly with a variety of analytical techniques. DSC has proven to be an excellent tool for detecting drug-excipient interaction: a "fingerprint" of the fresh formulation is made and then compared to aged samples to look for differences in the transitions of any ingredient, especially the active compound. For crystalline drugs that melt without decomposition, this is a relatively easy measurement because the peak area of the melt is a quantitative measure of the crystalline drug. For amorphous materials, which are often miscible with other amorphous excipients, or for crystalline drugs that decompose instead of melting, the measurement of drug-excipient interaction by DSC is often much more difficult. Just as with many of the previous examples, always start the analysis of new materials with TGA; this will save a lot of time in the end and help provide the correct interpretation of many transitions.
Figure 30 shows TGA results on a Cold/Allergy tablet that used a crystalline drug as the active ingredient. Analysis of the weight and derivative curves show the sample contains more than 1% volatiles and starts to slowly decompose above 100°C. This means that the DSC experiment needs to be performed in hermetic pans in order to avoid a large endothermic evaporation peak that could hide other weak transitions.
Figure 30.
The DSC data for this material is shown in Figure 31 which is a comparison of three separate heating experiments at 10°C/min on the same sample. The first heat in the sealed hermetic pan does not show any transitions until just below 100°C where the sample is known to decompose from the TGA data of Figure 30. The endotherm between 100 and 150°C could easily be misinterpreted as a melt if it were not for the TGA data. After the first heat to 175°C, the sample was cooled and heated a second time in the sealed hermetic pan (volatiles not lost). This run shows only a glass transition just above 0°C. The drug was crystalline to begin with (no Tg) but has converted to an amorphous form as the result of the decomposition that occurred starting at 100°C on the first heat. When evaluating storage stability of this formulation, the researcher should look for the formation of a glass transition over time. One question that still needs to be answered is the actual temperature of the glass transition.
Figure 31.
As discussed earlier, the temperature of a glass transition decreases with increasing amounts of moisture or solvents. The TGA results show that the sample probably contains slightly more than one percent water. To determine the maximum glass transition of a dry sample, a pinhole was placed in the lid of the hermetic pan at the end of the second run and the sample was dried at 150°C in the DSC cell for 30 minutes. After drying, the sample was heated a third time, and the glass transition is seen starting at 50°C and ending near 100°C. Therefore, the analytical chemist who is evaluating the storage stability of the formulation would look for the development of a glass transition between 0 and 100°C as the sample ages. The wide temperature range is due to possible variations in moisture from batch to batch.

8. Protein Denaturation
Protein denaturation is a general term used to describe a change in structure of a protein. This change usually results in a nonreversible unfolding of the protein that affects its shape and, therefore, its biological activity. There are a variety of techniques used to measure protein denaturation including changes in physical properties (solubility) and changes in reactivity such as with enzymatic proteins.
With DSC, protein denaturation is the measurement of the thermal stability of a protein in solution. A low-energy endothermic peak is observed over a temperature range. The temperature of the peak provides information about thermal stability, and the area of the peak is a quantitative measure of the energy absorbed by the protein in order to change structure. Experimental conditions such as heating rate, pH, ionic strength (salt concentration) and even protein concentration can affect results. In order to minimize the opportunity for aggregation of the protein, most measurements are performed at relatively low concentration (1%, or 10 mg/ml of solution). DSC data for the denaturation of albumin from chicken eggs is shown in Figure 32, which compares 1 and 10% concentrations. At 1% concentration, a peak of about 40 μW in height and 0.21 J/g in area is easily detected. The minor detail associated with the peak is, however, less clear because of the low signal strength. At 10% concentration, the peak is about 400 μW in size and there is excellent detail showing minor shoulders on both the low and high temperature sides of the peak. The fact that the peak temperatures differ by only 0.02°C provides confidence that minimal aggregation occurs at 10% concentration.
Figure 32.
There is great flexibility in analyzing data. In Figure 33, the 10% concentration data is shown with a sigmoidal baseline and the percent denaturation is plotted as a function of temperature. A plot of time during the experiment shows that the entire
experiment took only 60 minutes which is much faster than typically obtained with microcalorimeters or solution calorimeters.
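Plotting percent denaturation against temperature, as in Figure 33, amounts to a running partial integral of the baseline-subtracted endotherm. A minimal sketch, using a simple linear baseline for clarity (the text uses a sigmoidal one; the function name is illustrative):

```python
def percent_denatured(temps, heat_flow):
    """Percent denaturation vs temperature: cumulative partial area of the
    baseline-subtracted DSC peak, normalized by the total peak area."""
    n = len(temps)
    # Linear baseline drawn between the first and last points of the peak region
    baseline = [heat_flow[0] + (heat_flow[-1] - heat_flow[0]) *
                (temps[i] - temps[0]) / (temps[-1] - temps[0]) for i in range(n)]
    excess = [hf - b for hf, b in zip(heat_flow, baseline)]
    # Trapezoidal running integral
    areas = [0.0]
    for i in range(1, n):
        areas.append(areas[-1] + 0.5 * (excess[i] + excess[i - 1]) * (temps[i] - temps[i - 1]))
    total = areas[-1]
    return [100.0 * a / total for a in areas]
```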
Figure 33.
9. Freeze-Drying
Freeze-drying, or lyophilization, has become a standard process in the pharmaceutical industry for the manufacture of biologically active substances. However, it is not without limitations due to its high cost in capital and energy, long processing time and difficulty in selecting manufacturing conditions of time, temperature, vacuum and component concentration. All of these parameters must be optimized in order to achieve a final product with the desirable characteristics of:

• Full activity of the protein or drug
• Easy reconstitution
• Acceptable appearance of the freeze-dried cake
• Good storage stability
The process of freeze-drying relies on the vapor pressure of ice. Even at temperatures as low as –50°C, ice sublimes and leaves a very porous, low-density cake containing the stabilized drug. Since the sublimation (drying) rate is very temperature dependent, use of the highest possible temperature during primary drying provides maximum drying efficiency and the lowest process cost.

In order to select the optimum drying temperature, it is necessary to understand the physical characteristics of the components used in the formulation to be freeze-dried. In decreasing order of mass, these are typically water, bulking agents, buffers or stabilizers, and finally the drug itself. The bulking agent, which can be either crystalline or amorphous, and its interaction with frozen and unfrozen water in the frozen solution define the physical structure that is essential to successful freeze-drying. This structure manifests itself in the form of transitions that occur at specific temperatures. Physical properties of the bulking agent, such as modulus or viscosity, can change by orders of magnitude depending on whether the process temperature is a few degrees above or below the transition temperature. Therefore, knowledge of this structure and how it changes with time and temperature is required for successful drying.

DSC has been used with only modest success in the characterization of frozen solutions used for freeze-drying. The reason is that there are numerous components and transitions occurring within a narrow temperature range, and DSC can only measure their sum. In addition, DSC must use relatively high heating rates (10–20°C/min) in order to optimize sensitivity, while the best resolution is obtained at low heating rates of 0.5 to 1°C/min. In the introduction section on new DSC technologies, it was explained how Modulated DSC® has both an average and a modulated heating rate.
The combination of two heating rates allows the operator to select a slow average heating rate in order to obtain good resolution and a higher modulated heating rate to obtain increased sensitivity during the same experiment. In addition, the resulting modulated heat flow can be separated into the heat capacity and kinetic components of the total heat flow in order to improve ease of data interpretation. Figure 34 shows the raw modulated heating rate and modulated heat flow signals from an MDSC® experiment performed at an average heating rate of 0.5°C/min on a frozen solution of 40% sucrose in water. The change in heating rate (modulated) is approximately 3°C/min which causes the modulation in the heat flow signal.
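The modulated temperature program combines a slow underlying ramp with a sinusoidal perturbation, and the instantaneous heating rate is the derivative of that program. A sketch with the 0.5°C/min average rate from this experiment (the amplitude and period are illustrative assumptions, chosen so the modulation term is roughly ±3°C/min):

```python
import math

def modulated_rate(t_min, beta=0.5, amp=0.48, period=1.0):
    """Instantaneous heating rate (C/min) of T(t) = T0 + beta*t + amp*sin(2*pi*t/period).
    beta is the average rate; the modulation amplitude is 2*pi*amp/period (~3 C/min here)."""
    return beta + (2 * math.pi * amp / period) * math.cos(2 * math.pi * t_min / period)

peak_rate = modulated_rate(0.0)  # average rate plus the modulation amplitude, ~3.5 C/min
```

Averaged over a whole modulation period, the sinusoidal term cancels and only the 0.5°C/min underlying rate remains, which is what gives MDSC® good resolution and good sensitivity in the same run.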
Figure 34.
Figure 35 shows the calculated signals from this MDSC® experiment. The Total signal is very difficult to interpret because it is equivalent to standard DSC at the same heating rate and contains two different transitions. The first of these is the important glass transition seen in the Reversing signal between –43.6 and –39.4°C. The second transition is an exotherm in the Nonreversing signal caused by crystallization of free water that could not crystallize during the quench cooling of the sample. This peak shows a maximum at about –36°C and a heat of crystallization of 5.7 J/g. It is not surprising that unfrozen water would begin to crystallize near –42°C since this is near the onset of the glass transition where a significant increase in molecular mobility and diffusion can occur.
Figure 35.
Figure 36 shows MDSC® data obtained on the same sample as above except that it was cooled at 0.5°C/min as compared to the quench cooling that was used in the previous run. The cooling and heating data are plotted in heat capacity units so that they can be visually compared. Notice that the slow cooling produces a more complex structure which has two step changes in heat capacity. The derivative signal more clearly shows the double transition and shows that there are very minor temperature lags between heating and cooling with the Tzero DSC™ technology. Even at the slow average heating rate of 0.5°C/min, MDSC® provides extremely high sensitivity for characterizing the complex structure of frozen solutions used for freeze-drying.
Figure 36.
10. Miscellaneous
This paper has focused primarily on the use of DSC and MDSC® for characterizing pharmaceutical materials. However, no analytical laboratory would be complete without several other thermal analysis instruments that provide complementary information to DSC and MDSC®. These include:

Thermogravimetric Analysis (TGA): Weight Changes
• Moisture content
• Solvate/hydrate content
• Decomposition analysis (can be combined with FTIR and GC-MS)

Thermomechanical Analysis (TMA): Dimensional Changes
• Coefficient of thermal expansion
• Dimensional stability of fibers and films

Dynamic Mechanical Analysis (DMA): Viscoelastic Properties of Solids
• Modulus of coatings and packaging materials
• Branching of polymers

Rheology: Viscoelastic Properties of Fluids
• Application of topical ointments
• Stability of suspensions and dispersions
• Viscosity of fluids
The application of thermal analysis in the study of metallic materials

Ángel Varela and Ana García
Escuela Politécnica Superior, Universidade da Coruña
Mendizábal s/n, 15403 Ferrol, Spain
[email protected]

1. Introduction

The first commercial TGA equipment appeared in 1945, followed by DTA in 1960 and DSC in 1964. The Journal of Thermal Analysis appeared for the first time in 1969, and Thermochimica Acta came out the following year. In the seventies, when thermal analysis methods started to be generally applied, major advances in scientific research on the behaviour of metallic materials had already occurred. This was especially true for the alloys in greatest industrial demand, such as steel, and for the alloys with the greatest future expectations, such as light alloys and mainly aluminium alloys.

In the first half of the past century, famous metallurgists such as Bain, Davenport, Grossmann and Jominy, amongst others, carried out very important work. These scientists, together with institutions like the A.S.M. (American Society for Metals) and IRSID in France, led to the creation of the atlases of T.T.T. and C.C.T. curves of commercial steels, applying techniques like dilatometry, which measures the dimensional variation of the tested sample as a function of temperature, and metallography, which freezes the structure after isothermal holds of different durations at various temperatures.

As a consequence, thermal analysis methods were not initially applied to metallic materials but to other materials that were being discovered and whose properties and characteristics were practically unknown. This was also due to the limitations of the methods, especially with respect to the range of working temperatures, which made their application to metallic materials not very useful at that time. Nevertheless, since the microstructure of metallic materials is conditioned not only by temperature but also by the conditions of the process (heating and cooling rates), they too can be studied with these analysis methods.
As a result, these methods can be used to determine fusion temperatures and latent heats of fusion, allotropic transformations, solid-state phase transformations in which diffusive phenomena may intervene, equilibrium diagrams, transformation kinetics, changes in magnetic behaviour, behaviour with respect to oxidation at high temperatures, the variation of the coefficient of thermal expansion, the variation of specific heat, and thermal stability. In general, any process that can be thermally activated can be studied with DSC, which can also characterise transformation kinetics. Similarly, all those processes involving a modification of the mechanical properties can be studied using DMA or TMA. Examples of these phenomena include the precipitation of intermetallic compounds in some alloys, which involves an increase in the elastic modulus, and phase transformations. Some specific applications of thermal analysis methods to metallic materials are pointed out below.
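One standard way DSC is used to characterise transformation kinetics, as mentioned above, is the Kissinger method: the peak temperature Tp is measured at several heating rates β, and a plot of ln(β/Tp²) against 1/Tp gives a line of slope −Ea/R. A sketch (not from this text; the data in the test are synthetic):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def kissinger_activation_energy(betas, peak_temps):
    """Activation energy (J/mol) from the least-squares slope of
    ln(beta/Tp^2) versus 1/Tp (Kissinger method). peak_temps in kelvin."""
    xs = [1.0 / tp for tp in peak_temps]
    ys = [math.log(b / tp ** 2) for b, tp in zip(betas, peak_temps)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope * R
```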
ANGEL VARELA AND ANA GARCÍA
2. Shape memory materials
There are metallic materials known as shape memory materials because they present the special feature of undergoing a solid-state transformation known as martensitic, which generally occurs at low temperatures and in which diffusive phenomena do not intervene. When these materials are plastically deformed at a temperature below the final temperature of the martensitic transformation, they can later recover their original shape by heating above the final temperature of the reverse transformation. Due to their shape memory effect, superelasticity, damping capacity, change in electrical resistance and good mechanical properties, shape memory materials tend to be used as fire detectors, muscles in robotics, mechanical sensors (for example, for opening windows for ventilation) and dental prostheses, amongst other uses. In recent years, different works have appeared in which the DSC method is applied to determine the characteristic temperatures and the enthalpy changes associated with the endothermic and exothermic processes that occur during the controlled heating and cooling of these materials [1-7].
Figure 1. DSC curve showing the martensitic transition reversibility in a shape memory alloy
APPLICATIONS OF THERMAL ANALYSIS TO THE STUDY OF METALS
[Figure 2 plots the storage modulus E′ (Pa), on the order of 10^10, versus frequency (Hz) for isotherms from 30 to 110 ºC.]
Figure 2. Determination of the martensitic transition temperature with DMA. According to reference [15]
The transition temperature for this class of alloys can also be determined employing DMA methods. Figure 2 shows the elastic modulus obtained for an equiatomic nickel-titanium alloy as a function of frequency at nine different temperatures. The martensitic transition temperature can be determined by observing the separation between curves, which increases around that temperature. In addition, it can be observed that the curve at the transition temperature presents a greater slope than the rest, which indicates that the modulus's dependence on frequency increases at the martensitic transition temperature. A third way to study the martensitic transformation temperatures of shape memory alloys is through the use of TMA. Figure 3 shows the deformation measured on a wire of the same alloy as in the previous example, plotted against time as the temperature varied while the wire was subjected to a constant force.
[Figure 3 plots strain (%) and temperature (°C) versus time (s) for constant stresses of 8.0e06 Pa and 2.3e08 Pa.]
Figure 3. Determination of the martensitic transformation temperature with TMA. According to reference [15]
In Figure 3, a sharp change in strain is observed when the transformation temperature is reached. The transformation temperature shifts towards higher values as the applied stress increases from 8.0 MPa to 230 MPa.
3. Determination of the progress of metal and alloy solidification

If an alloy's solidification is studied using the DSC curve, as in the example of a lead-tin alloy, and the analysis is centred on the solidification interval, the evolution of the liquid-phase percentage as a function of temperature can be traced [8-9].
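The liquid-phase percentage curve is obtained from the running partial area of the fusion peak relative to its total area. A minimal sketch of that partial-area idea (not the full Borchardt and Daniels kinetic treatment; the heat flow is assumed already baseline-subtracted, and the function name is illustrative):

```python
def melt_fraction(temps, excess_heat_flow):
    """Fraction transformed versus temperature: cumulative partial area of the
    baseline-subtracted fusion peak divided by the total peak area (trapezoidal rule)."""
    areas = [0.0]
    for i in range(1, len(temps)):
        step = 0.5 * (excess_heat_flow[i] + excess_heat_flow[i - 1]) * \
               (temps[i] - temps[i - 1])
        areas.append(areas[-1] + step)
    total = areas[-1]
    return [a / total for a in areas]
```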
[Figure 4, upper panel: DSC heat flow (mW) versus temperature from 0 to 350 ºC. Lower panel: heat flow and melting progress, from 100% solid to 100% liquid, between 150 and 250 ºC.]
Figure 4. DSC curve, endothermic fusion and variation of the liquid phase percentage determined using the Borchardt and Daniels method [16]

4. Determination of equilibrium diagrams

If DSC curves are obtained for different alloys of binary [10] or ternary [11] systems, the liquidus and solidus temperatures, as well as the temperatures corresponding to solid-state transformations, can be determined. The corresponding equilibrium diagram can then be constructed from all the temperatures obtained in these analyses. In the following example, we show step by step how the phase diagram of the lead-tin system is built up. From
92
ANGEL VARELA AND ANA GARCÍA
the DSC curves obtained during solidification, the temperatures at the beginning of solidification (points on the liquidus line) and the temperature of the eutectic transformation are determined for alloys with different proportions of lead and tin.
[Figure 5 plots heat flow (mW) versus temperature from 100 to 300 ºC, showing the solidification-beginning peak and the larger eutectic transformation peak.]
Figure 5. DSC curve obtained during the solidification of an alloy with 30% tin and 70% lead

If such points are determined for the different lead-tin alloys, it is possible to begin the construction of the phase diagram, as can be seen in Figure 6, which plots the alloy composition against temperature. In it we can see each of the DSC curves obtained for the different alloys, with the corresponding points on the liquidus and solidus lines, the fusion points of the two metals in the pure state, and the way the lines of the phase diagram are traced.
[Figure 6 plots % Sn (0-100) versus temperature (100-350 ºC) for alloys with 0, 30, 50, 61.9, 70, 90 and 100% Sn (the last one inverted).]
Figure 6. Construction of the Pb-Sn phase diagram

In order to establish the points of maximum solubility and, should it not be known, the eutectic alloy's composition, the Tamman triangle is constructed by plotting the area of the eutectic transformation peak for each tested alloy against the alloy's composition, see Figure 7.
[Figure 7 plots the eutectic transformation peak area versus % Sn, with the eutectic composition at 61.9% Sn and the maximum-solubility compositions at the points where the extrapolated lines reach zero area.]
Figure 7. Determination of the points of maximum solubility and of the eutectic composition for the Pb-Sn system
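The Tamman construction amounts to fitting a straight line to the eutectic peak area versus composition on each side of the eutectic: the two lines intersect at the eutectic composition and intercept zero area at the maximum-solubility compositions. A sketch (function names and data are illustrative, not taken from the figure):

```python
def line_fit(xs, ys):
    """Least-squares slope and intercept of a straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def tamman_points(left, right):
    """left/right: lists of (composition % Sn, eutectic peak area) on each side
    of the eutectic. Returns (left solubility limit, eutectic composition,
    right solubility limit)."""
    m1, b1 = line_fit([p[0] for p in left], [p[1] for p in left])
    m2, b2 = line_fit([p[0] for p in right], [p[1] for p in right])
    x_left_zero = -b1 / m1               # extrapolated zero area: solubility limit
    x_right_zero = -b2 / m2
    x_eutectic = (b2 - b1) / (m1 - m2)   # intersection of the two branches
    return x_left_zero, x_eutectic, x_right_zero
```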
5. Measurement of heat capacity

Using DSC, the variation of heat flow with temperature is obtained for the tested sample. The results are compared with the curve for a synthetic sapphire standard, whose specific heat is taken as the reference, which allows the instrument's software to determine the variation of the specific heat with temperature [12-13].
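The sapphire comparison above is commonly done as a ratio method: the sample's Cp is the reference Cp scaled by the ratios of baseline-corrected heat flows and masses. A hedged sketch of that calculation at a single temperature (the function name is illustrative):

```python
def cp_sample(cp_ref, m_ref, m_sample, hf_sample, hf_ref, hf_baseline):
    """Specific heat (J/g K) by the DSC ratio (three-curve) method at one temperature.
    cp_ref: known Cp of the sapphire standard; hf_*: measured heat flows (mW)
    for sample, sapphire and empty-pan baseline runs; m_*: masses (mg)."""
    return cp_ref * (m_ref / m_sample) * \
           (hf_sample - hf_baseline) / (hf_ref - hf_baseline)
```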
[Figure 8 plots Cp (J/gK), from about 0.2 to 0.8, versus temperature (400-1400 K), fitted by Y = 4E-17x⁶ − 2E-13x⁵ + 4E-10x⁴ − 5E-07x³ + 0.0003x² − 0.0990x + 12.125, R² = 0.9992.]

Figure 8. Variation of Cp with temperature in a nickel-based superalloy. According to references [13-14]
6. Transition temperatures: magnetic change (Curie temperature), allotropic transformations, fusion temperature, solid-state phase transformations

Using DSC, all these temperatures, as well as the energy associated with the corresponding transformations, can be detected.
[Figure 9 plots heat flow (mW) versus temperature (700-1500 °C), showing the Curie temperature at 771.85 °C, the alpha-gamma transition at 911.26 °C and the gamma-delta transition at 1393.0 °C.]
Figure 9. Allotropic transformations of pure Fe
[Figure 10 plots heat flow (mW) versus temperature (100-200 °C), showing the melting endotherm of indium with a peak labelled 156.2 °C.]
Figure 10. Determination of the fusion temperature of pure indium
In addition, by means of TMA, the dimensional change, either contraction or expansion, associated with changes of crystalline structure in solid-state phase transformations can be measured.
[Figure 11 plots dimensional change (mm) versus temperature, showing the pearlite-to-austenite transformation on heating and the austenite-to-martensite transformation on cooling.]
Figure 11. Dimensional changes produced in a steel subjected to heating and then to cooling
7. Behaviour with respect to oxidation at high temperatures

Using TGA, and regulating the nature and pressure of the atmosphere inside the thermobalance, the material's oxidation behaviour over time at various temperatures can be measured.
[Figure 12 plots weight increment (%) and temperature (°C) versus time (min), with weight gains of 3.841% after 625 min and 22.273% after 1500 min, at temperatures between 800 and 1500 °C.]
Figure 12. High-temperature oxidation of a stainless steel in air
References

1. G. Airoldi, G. Riva, B. Rivolta, M. Vanelli. "DSC calibration in the study of shape memory alloys". Journal of Thermal Analysis, Vol. 42, pp. 781-791 (1994).
2. H.C. Lin, K.M. Lin. "An investigation of martensitic transformation in an Fe-30Mn-6Si shape memory alloy". Scripta Materialia, Vol. 34, nº 3, pp. 343-347 (1996).
3. E. Hornbogen, V. Mertinger, D. Wurzel. "Microstructure and tensile properties of two binary NiTi-alloys". Scripta Materialia, Vol. 44, pp. 171-178 (2001).
4. H. Xu, S. Tan. "Calorimetric investigation of a Cu-Zn-Al alloy with two way shape memory". Scripta Metallurgica et Materialia, Vol. 33, nº 5, pp. 749-754 (1995).
5. D. Chrobak, H. Morawiec. "Thermodynamic analysis of the martensitic transformation in plastically deformed NiTi alloy". Scripta Materialia, Vol. 44, pp. 725-730 (2001).
6. A. Rotini, A. Biscarini, R. Campanella, B. Coluzzi, G. Mazzolai. "Martensitic transition in Ni40Ti50Cu10 alloy containing hydrogen: calorimetric (DSC) and mechanical spectroscopy experiments". Scripta Materialia, Vol. 44, pp. 719-724 (2001).
7. B.Y. Li, L.J. Rong, Y.Y. Li. "Electric resistance phenomena in porous Ni-Ti shape memory alloys produced by SHS". Scripta Materialia, Vol. 44, pp. 823-827 (2001).
8. B. Cantor. "Differential scanning calorimetry and the advanced solidification processing of metals and alloys". Journal of Thermal Analysis, Vol. 42, pp. 647-665 (1994).
9. A. Jardy, S. Hans, D. Ablitzer. "Détermination des températures de liquidus et de solidus d'alliages de titane par analyse thermique différentielle". Revue de Métallurgie, pp. 1021-1028 (2000).
10. G. Hakvoort, T.E. Hakvoort. "A practical thermal analysis course". Journal of Thermal Analysis, Vol. 49, pp. 1715-1723 (1997).
11. A. Sabbar, A. Zrineh, M. Gambino, J.P. Bros. "Contribution à l'étude du diagramme d'équilibre des phases du système ternaire indium-étain-zinc". Thermochimica Acta, 369, pp. 125-126 (2001).
12. L. Perring, J.J. Kuntz, F. Bussy, J.C. Gachon. "Heat capacity measurements by differential scanning calorimetry in the Pd-Pb, Pd-Sn and Pd-In systems". Thermochimica Acta, 366, pp. 31-36 (2001).
13. J.H. Suwardie, R. Artiaga, J.L. Mier. "Thermal characterization of Ni-based super-alloy". NATAS, pp. 75-79 (2000).
14. A. Varela, R. Artiaga, F. Barbadillo, J.L. Mier, J.H. Suwardie. "Estudio de la capacidad calorífica de una superaleación de base níquel". Cadernos do Laboratorio Xeolóxico de Laxe, nº 25, pp. 23-26 (2000).
15. R. Artiaga, A. García, L. García, A. Varela, J.L. Mier, S. Naya, M. Graña. "DMTA study of a nickel-titanium wire". Journal of Thermal Analysis and Calorimetry, Vol. 70, pp. 199-207 (2002).
16. H.J. Borchardt, F. Daniels. "The application of differential thermal analysis to the study of reaction kinetics". Journal of the American Chemical Society, Vol. 79, pp. 41-46 (1957).
Thermal analysis of inorganic materials

José Luis Mier Buenhombre
Escuela Politécnica Superior da Coruña
Mendizábal s/n, 15403 Ferrol, Spain
[email protected]
1. Introduction

Thermogravimetry (TG) studies the change (gain or loss) of a sample's mass as a function of temperature and/or time. These changes are measured using a thermobalance, in which the tests are carried out at a programmed heating rate in a suitable enclosed system with a controlled atmosphere. The application of thermogravimetry to inorganic gravimetric analysis caused a real revolution in the early 1950s. Today, thermogravimetry resolves many analytical problems in inorganic chemistry, ceramics, metallurgy, pigment development, mineralogy and geochemistry. The application of thermogravimetry is limited to events with detectable mass changes; otherwise, other techniques, such as differential thermal analysis (DTA) or differential scanning calorimetry (DSC), must be used. The main inorganic thermal events recorded by TG are summarized in Table 1.
Table 1. Main thermal events registered by TG in inorganic materials

Sublimation:              A (solid) → A (gas)
Vaporization:             A (liquid) → A (gas)
Adsorption:               A (solid) + B (gas) → A (solid)(B gas-ads)
Absorption:               A (solid) + B (gas) → A (solid)(B gas-abs)
Desorption:               A (solid)(B gas-ads) → A (solid) + B (gas)
                          A (solid)(B gas-abs) → A (solid) + B (gas)
Oxidation:                A (solid) + B (gas) → C (solid)
Pyrolysis:                A (solid) → B (solid) + gases
Volatilization:           A (solid) + B (gas) → gases
Heterogeneous catalysis:  A (solid) + (gases)1 → A (solid) + (gases)2
JOSÉ L. MIER
There is an absorption (endothermic process) or an emission (exothermic process) of heat when a material undergoes a change of physical state or a chemical reaction. Differential thermal analysis (DTA) measures the temperature difference between a sample and a reference (ΔT) versus temperature, whereas differential scanning calorimetry (DSC) records the difference in heat flow between a sample and a reference versus temperature. In both techniques a programmed heating rate is applied. DSC gives a value for the amount of energy absorbed or evolved in a particular transition and, therefore, also provides a direct calorimetric measurement. Applications of DTA and DSC to inorganic samples include:

• Determination of enthalpy in phase changes
• Determination of phase diagrams
• Determination of enthalpy in chemical reactions
• Kinetic analysis
• Identification and characterization
2. Quantitative chemical analysis

2.1. Determination of alkaline-earth elements in solution

The quantitative analysis of Ca2+, Sr2+ and Ba2+ in aqueous solution is possible by TGA (1). The separation of these ions is carried out with ammonium oxalate to give mixed metal oxalate hydrates, which are decomposed on the thermobalance (figure 1). The following steps are observed in the TG and DTG plots:
• Loss of hydration water (step A)
• Decomposition of the three anhydrous metal oxalates to metal carbonates and CO (step B):
  o CaC2O4 → CaCO3 + CO
  o SrC2O4 → SrCO3 + CO
  o BaC2O4 → BaCO3 + CO
• Calcium carbonate decomposition, CaCO3 → CaO + CO2 (step C)
• Strontium carbonate decomposition, SrCO3 → SrO + CO2 (step D)
• Barium carbonate decomposition, BaCO3 → BaO + CO2 (step E)
The amounts of calcium, strontium and barium can be calculated according to the following equations:
Ca = Atomic mass (Ca) · Mass loss (C) / Molecular mass (CO2)

Sr = Atomic mass (Sr) · Mass loss (D) / Molecular mass (CO2)

Ba = Atomic mass (Ba) · Mass loss (E) / Molecular mass (CO2)
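The arithmetic of these equations can be sketched in a few lines of Python (an illustrative sketch: the molar masses are rounded values and the mass-loss readings are hypothetical, not data from reference (1)):

```python
# Metal content from TG mass-loss steps: each carbonate loses one CO2 on
# decomposition, so m(metal) = mass loss of the step * M(metal) / M(CO2).
M_CO2 = 44.01  # g/mol
ATOMIC_MASS = {"Ca": 40.08, "Sr": 87.62, "Ba": 137.33}  # g/mol

def metal_mass(element, mass_loss_mg):
    """Mass of the metal (mg) from the CO2 loss of its carbonate step."""
    return mass_loss_mg * ATOMIC_MASS[element] / M_CO2

# Hypothetical mass losses read from steps C, D and E of the TG curve (mg)
for element, loss in [("Ca", 2.2), ("Sr", 1.3), ("Ba", 0.9)]:
    print(f"{element}: {metal_mass(element, loss):.2f} mg")
```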
THERMAL ANALYSIS OF INORGANIC MATERIALS
Figure 1. TGA curves of calcium, strontium and barium oxalate hydrates
2.2. Calcium and magnesium analysis in dolomite

Dolomite is a double carbonate of calcium and magnesium containing 30.41% calcium oxide (CaO), 21.86% magnesium oxide (MgO) and 47.73% carbon dioxide (CO2). As an ore, it facilitates the process of obtaining magnesium. It is used as a building and ornamentation material and in the manufacture of certain materials. As a raw material it is employed to obtain magnesia [Mg(OH)2], itself used in iron and steel refractory coatings, and as a flux material in the metallurgical industry. Figure 2 shows the TG plot for dolomite. There is a loss of water up to 200ºC. Magnesium carbonate (MgCO3) decomposition appears at 470ºC: MgCO3 → MgO + CO2. Segment DE corresponds to a mixture of MgO and CaCO3. Between 600 and 850ºC the calcium carbonate decomposes: CaCO3 → CaO + CO2. Segment FG corresponds to a mixture of MgO and CaO. The difference W1 − W2 is equal to the mass of carbon dioxide evolved between 500 and 900ºC by the decomposition of calcium carbonate. The amount of calcium oxide is given by:
W(CaO) = (W1 − W2) · 56/44 = (W1 − W2) · 1.272
where 56 is the CaO molecular weight and 44 the CO2 molecular weight.
The amount of magnesium oxide is given by the difference: W(MgO) = W2 − W(CaO)
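The dolomite calculation above translates directly into code; in this sketch the W1 and W2 plateau readings are hypothetical and the molar masses are rounded:

```python
# Dolomite analysis from the TG curve: W1 and W2 are the sample masses
# (mg) before and after the CaCO3 decomposition step (500-900 C), so
# W1 - W2 is the CO2 evolved and W(CaO) = (W1 - W2) * 56/44.
M_CAO = 56.08  # g/mol
M_CO2 = 44.01  # g/mol

def cao_mgo_from_tg(w1_mg, w2_mg):
    """Return (W(CaO), W(MgO)) in mg from the plateau masses W1 and W2."""
    w_cao = (w1_mg - w2_mg) * M_CAO / M_CO2
    w_mgo = w2_mg - w_cao
    return w_cao, w_mgo

# Hypothetical plateau readings from a curve like figure 2
cao, mgo = cao_mgo_from_tg(16.0, 10.0)
print(f"CaO: {cao:.2f} mg, MgO: {mgo:.2f} mg")
```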
Figure 2. TGA curve of dolomite

3. Clays

3.1. Kaolinite

Kaolinite's formula is Al2Si2O5(OH)4. Humans have used this material in different ways from time immemorial. In the fifteenth century, porcelain made of ceramics with a high kaolin content acquired great fame among the nobility. Nowadays, the main consumer of kaolin is the paper industry, which uses more than 50% of production as filler and to give a superficial finish, or stucco, to paper. The manufacture of ceramic materials (porcelain, stoneware, crockery, sanitary pottery and electroceramics) and refractories (thermal insulators and cements) is also important. Kaolinite is found as a secondary mineral formed by the weathering or hydrothermal alteration of aluminum silicates, particularly feldspars. It occurs naturally in almost every country of the world. Figure 3 shows the TG curve of kaolinite. Absorbed water is gradually evolved at temperatures up to 200ºC (in this case the mass loss is 0.8% of the sample). The dehydroxylation reaction occurs in the temperature range 400-700ºC, giving a mass loss of 13.4%. Experimental factors, such as the heating rate and a large sample particle size, can shift the initial and final temperatures of the dehydroxylation.
Figure 3. TGA curve of kaolinite
A typical DSC plot of kaolinite is shown in figure 4. The following peaks can be observed:
• Desorption of water from ambient temperature to 110ºC (endotherm); seldom observed by DTA and DSC except in highly disordered species, but easily observed by TG and DTG.
• Dehydroxylation of the crystal lattice at 450-700ºC (endothermic process with Tmin = 540ºC). This is the main endothermic peak, observed in all members of the group except allophane.
• Crystallization of a spinel-type structure at 900-1000ºC (exothermic process with Tmax = 990ºC).
• Formation of mullite above 1100ºC (exothermic process).
Figure 4. DSC curve of kaolinite
3.2. Hectorite

Hectorite [Na0.3(Mg,Li)3Si4O10(F,OH)2] is a clay mineral with a structure similar to that of bentonite. It belongs to the smectite group. It has a soft, greasy texture and feels like modeling clay when squeezed between the fingers. It is one of the more expensive clays, owing to its unique thixotropic properties. The main uses of hectorite are in cosmetics (lotions, soaps, creams and shampoos), coatings and inks. It is also employed as molding sand in metal casting and as filler in the paper industry. It has a great capacity to absorb and adsorb because of its high specific surface area. It plays an important part in industrial water purification and in the discoloration of oil, wine, cider and beer. Hectorite geological samples are usually associated with large amounts of calcite, in some cases with varying amounts of dolomite. Thus, most published thermal analysis curves reflect the thermal behavior not only of hectorite but also of carbonates. Figure 5 shows a typical DTA curve of hectorite. An endothermic peak at 119ºC is caused by the interlayer water loss, whereas the 742 and 838ºC peaks are due to the dehydroxylation/carbonate decomposition reactions. A narrow exothermic peak at 1110ºC is followed by endothermic peaks at 1135 and 1255ºC. The latter are probably due to the formation of clinoenstatite.
Figure 5. DTA curve of hectorite
4. Concrete and mortars

Concrete is a mixture of cement clinker, water, gypsum (CaSO4.2H2O) and aggregates such as quartz, limestone, dolomite and slag. Clinker is produced by the reaction of calcium oxide (CaO = C), silica (SiO2 = S), alumina (Al2O3 = A) and ferric oxide (Fe2O3 = F) at about 1500ºC to give tricalcium silicate (C3S), dicalcium silicate (C2S), tricalcium aluminate (C3A) and a ferrite solid solution of composition between C2F and C6A2F, often represented as C4AF. The hydration and hardening of Portland cement take place as a result of the following reactions:
2 C3S + 6 H2O → C3S2.3H2O + 3 Ca(OH)2
2 C2S + 4 H2O → C3S2.3H2O + Ca(OH)2
The hydrated tricalcium silicate derived from these reactions has extremely small particles and forms a colloidal suspension. Calcium hydroxide (portlandite) is a crystalline solid. The hydration products of the other cement components are not generally described as producing portlandite. If thermogravimetric analysis is carried out in carbon dioxide at atmospheric pressure, the first event will be the dehydroxylation, at about 400ºC, of any portlandite present in recently made concrete:
Ca(OH)2 (s) → CaO (s) + H2O (g)
But the amount of Ca(OH)2 is very small in old concrete because of the reaction of portlandite with carbon dioxide over many years:
Ca(OH)2 (s) + CO2 (g) → CaCO3 (s) + H2O (l)
Dollimore et al. (2) showed the importance of this last reaction in the thermal analysis of dolomite used as an aggregate in recycled portland cement concrete (RPCC). In N2, dolomite decomposes in a single step:
CaMg(CO3)2 (s) → CaO (s) + MgO (s) + 2 CO2 (g)
but in the presence of CO2 (figure 6) the dolomite dissociation is divided into two steps:
CaMg(CO3)2 (s) → MgO (s) + CaCO3 (s) + CO2 (g) at 780ºC
CaCO3 (s) → CaO (s) + CO2 (g) at 910ºC
The first dissociation permits the dolomite weight percentage to be calculated. In the second step, part of the CaCO3 comes from portlandite carbonation.
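Since the first dissociation step releases one CO2 per formula unit of dolomite, the dolomite content follows directly from that step's mass loss (a sketch with rounded molar masses and hypothetical sample figures):

```python
# Dolomite content from the first dissociation step in CO2:
# CaMg(CO3)2 -> MgO + CaCO3 + CO2 releases one CO2 per dolomite unit,
# so m(dolomite) = step mass loss * M(dolomite) / M(CO2).
M_DOLOMITE = 184.40  # g/mol, CaMg(CO3)2
M_CO2 = 44.01        # g/mol

def dolomite_percent(initial_mg, co2_step_loss_mg):
    """Dolomite wt% of the sample from the ~780 C mass-loss step."""
    return 100.0 * co2_step_loss_mg * M_DOLOMITE / M_CO2 / initial_mg

# Hypothetical figures: a 20 mg aggregate sample losing 2.0 mg in the step
print(f"{dolomite_percent(20.0, 2.0):.1f} % dolomite")
```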
Figure 6. TG and DTG curves for recycled portland cement concrete (RPCC) obtained in a flowing atmosphere of CO2 (100 ml/min) at a heating rate of 10ºC/min (2) (With permission of Elsevier).

On the other hand, the DTA curves of ancient concretes show an endothermic peak at 570ºC (figure 7). This peak is related to the allotropic transformation of quartz (3): α-SiO2 → β-SiO2
Figure 7. DTA and TG curves from a Pamplona cathedral mortar (3) (With permission of Elsevier).

Another component of concrete and mortars is gypsum. The natural gypsum formula is CaSO4.2H2O. If the two molecules of water are removed, anhydrite (CaSO4) is produced. There are two anhydrite forms: one which hydrates with water (soluble anhydrite) and another which shows no tendency to react with water (insoluble anhydrite). Figure 8 shows the differential thermal analysis of a CaSO4.2H2O sample heated at 20ºC/min (4). Crystallization water was partially removed starting at 123ºC to produce the hemihydrate form (CaSO4.0.5H2O), also called bassanite or plaster of Paris. The second peak, at 202ºC, is due to the loss of the remaining 0.5 H2O to form soluble anhydrite (CaSO4). The exothermic peak between 353 and 375ºC represents the phase change to insoluble anhydrite.
Figure 8. DTA of calcium sulfate dihydrate (4) (With permission of Elsevier)
5. Pigments

5.1. Egyptian blue

Egyptian blue, CaCu(Si4O10), is a very stable pigment which can be found in many works of art from the Egyptian, Mesopotamian and Roman civilizations. This compound can be synthesized in a thermobalance by heating a mixture of quartz (SiO2), cupric oxide (CuO), calcite (CaCO3) and a fluxing agent (Na2CO3, borax or PbO). Without these fluxing agents the reaction proceeds very slowly, leading to an impure product which does not have the intense blue colour of the pigment. With borax, for instance, the reaction mixture forms CaCu(Si4O10) at about 900ºC at a heating rate of 4ºC/min, and the product remains stable in an oxidizing atmosphere to about 1080ºC. Above this temperature it decomposes to give tridymite and a mixture of CuO and Cu2O, owing to the reduction of Cu2+ to Cu+ (figure 9). However, the initial compound does not form again on cooling, even though Cu+ reoxidizes to Cu2+ (5-6). The thermal stability of the isostructural compounds SrCu(Si4O10) and BaCu(Si4O10) is greater than that of the calcium compound, since they decompose at 1155 and 1170ºC respectively. Single crystals of the Ca, Sr and Ba compounds can be grown by using borax, PbO or Na2CO3 flux with heating cycles of 30 hours at about 900ºC. These crystals are similar to some Egyptian blue samples obtained from archaeological excavations.
Figure 9. TG-DTA-T curves showing the formation of Egyptian blue from a calcite-CuO-quartz mixture (5) (With permission of the American Chemical Society)
6. Extractive metallurgy

6.1. Thermal behavior of AlF3.3H2O

Aluminum fluoride is used as a flux in the electrolytic reduction of alumina (Al2O3) to produce aluminum metal. In the wet process, the anhydrous fluoride is prepared by heating trihydrated aluminum fluoride. This thermal decomposition involves three stages:
AlF3.3H2O → AlF3.0.5H2O + 2.5 H2O   from 108 to 277ºC
AlF3.0.5H2O → AlF3 + 0.5 H2O   from 277 to 550ºC
2 AlF3 + 3 H2O → Al2O3 + 6 HF   above 380ºC
Figure 10 shows the TG and DTG curves for AlF3.3H2O and for the system AlF3.3H2O/MgO at a heating rate of 10ºC/min (7). The first stage, in the temperature range from 100 to 277ºC and with a mass loss of 32.7%, is related to the loss of 2.5 molecules of water from AlF3.3H2O. The mass loss in the second stage (6.9%) corresponds to the formation of anhydrous aluminum fluoride. As the temperature exceeds 380ºC, aluminum fluoride reacts with water to give alumina (Al2O3).
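The reported step losses can be checked against the theoretical values for AlF3.3H2O (a sketch; the molar masses are rounded, so small differences from the experimental 32.7% and 6.9% are expected):

```python
# Theoretical mass losses for the stepwise dehydration of AlF3.3H2O, to
# compare with the experimental TG steps (32.7 % and 6.9 %).
M_ALF3 = 26.98 + 3 * 19.00  # g/mol
M_H2O = 18.02               # g/mol
M_TRIHYDRATE = M_ALF3 + 3 * M_H2O

def step_loss_percent(n_water):
    """Percent mass loss for releasing n_water mol of H2O per formula unit."""
    return 100.0 * n_water * M_H2O / M_TRIHYDRATE

print(f"step 1 (2.5 H2O): {step_loss_percent(2.5):.1f} %")
print(f"step 2 (0.5 H2O): {step_loss_percent(0.5):.1f} %")
```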
Figure 10. TG-DTG curves of AlF3.3H2O and the system AlF3.3H2O/MgO at a heating rate of 10ºC/min (7) (With permission of Elsevier)
6.2. Thermal oxidation of covellite

Covellite (CuS) usually exists in small quantities associated with other sulphides such as chalcocite (Cu2S), chalcopyrite (CuFeS2) and bornite (Cu5FeS4). Heating covellite in an oxidizing atmosphere causes the formation of copper-deficient compounds at low temperatures and oxidation to sulphates and oxides at higher temperatures. Dunn and Muzenda (8) carried out TGA/DTA tests with covellite samples at 20ºC/min in dry air (figure 11). They analyzed the evolved gases using coupled FTIR equipment. The first stage is the decomposition of a small amount of covellite to give digenite (Cu1.8S) and the oxidation of covellite to produce copper(I) sulphide. These reactions give an exothermic peak in the DTA curve and a mass loss in the TGA curve between 330 and 422ºC:
1.8 CuS + 0.8 O2 → Cu1.8S + 0.8 SO2
2 CuS + O2 → Cu2S + SO2
Between 422 and 474ºC there is a mass gain associated with an exothermic peak, due to the oxidation of Cu2S to CuSO4 according to the global reaction:
Cu2S + SO2 + 3 O2 → 2 CuSO4
Another exothermic peak and associated mass gain appear in the temperature range 474-585ºC. This event was related to a solid-solid reaction between Cu2S and CuSO4 to form Cu2O (exothermic peak) and to sulfation of the oxide formed (mass gain):
Cu2S + 2 CuSO4 → 2 Cu2O + 3 SO2
Cu2O + 2 SO2 + 1.5 O2 → 2 CuSO4
Also, the presence of CuO.CuSO4 was detected in the melt at 583ºC, probably due to these proposed reactions:
2 CuSO4 → CuO.CuSO4 + SO2 + 0.5 O2
Cu2O + 4 CuSO4 → 3 CuO.CuSO4 + SO2
The formation of CuO.CuSO4 continued up to 653ºC, at which point an endothermic peak and a mass loss started. This last stage was related to the decomposition of CuO.CuSO4 to CuO.
Figure 11. TGA-DTA-FTIR records for the oxidation of covellite from ambient temperature to 820ºC in dry air at a heating rate of 20ºC/min (8) (With permission of Elsevier).
References
1. Erdey L, Paulik F, Svehla G and Liptay G, Anal. Chem., 182, 329 (1961).
2. Dollimore D, Gupta J.D, Lerdkanchanaporn S and Nippani S, Thermochim. Acta, 357-358, 31 (2000).
3. Alvarez J.L, Navarro I and García-Casado P.J, Thermochim. Acta, 365, 177 (2000).
4. Adams J, Kneller W and Dollimore D, Thermochim. Acta, 211, 93 (1992).
5. Wiedemann H.G and Bayer G, Chem. Tech., 381 (1977).
6. Bayer G and Wiedemann H.G, Sandoz Bull., 40, 19 (1976).
7. Delog X, Yongqin L, Ying L, Longbao Z and Wenkui G, Thermochim. Acta, 352-353, 47 (2000).
8. Dunn J.G and Muzenda C, Thermochim. Acta, 369, 117 (2000).
Characterization of Coal by Thermal Analysis Methods

Sen Li, Nathan Whitely, Weibing Xu and Wei-Ping Pan

Thermal Analysis Laboratory, Materials Characterization Center, Western Kentucky University, Bowling Green, KY 42101, U.S.A.
[email protected]

Thermogravimetric analysis (TGA) is a technique with immense utility for analyzing numerous coal systems. Although instrumentation technology limits the heating rate, TGA is very useful in making valid predictions of the chemical and physical properties of coal. TGA's foremost advantages are the precision, speed and ease with which samples can be analyzed. One person can analyze small samples, on the order of grams, that would demand a larger staff and much more money to analyze in a large-scale combustion system. Coal is a very heterogeneous material composed of both organic and inorganic substances. The organic contents, called coal macerals, are the desired portion of the coal. The inorganic contents, called mineral matter, are unwanted components that dilute the coal and provide a source of pollution. Coals can be classified by rank according to calorific value. Once the coals are ranked by calorific value in BTU/lb, other qualities such as the volatiles content can be used to further divide the ranks into sub-categories. With increasing rank many important properties of the coal change: the size of the hydrocarbon molecules, the carbon content and the calorific value increase, while the water content and volatile content decrease. Overall, as the rank of coal increases, the quality and value of the coal also increase. The combustion of coal is generally the combination of two processes. One is the pyrolysis, or devolatilization, of the coal due to an applied thermal stress. The second is the heterogeneous combustion of the remaining char according to carbon-oxygen reactions. The heating rate is very important when discussing the combustion of coal. High heating rates will cause simultaneous evolution and ignition of volatiles, whereas with low heating rates devolatilization will occur prior to ignition and combustion.
The burning profile of coal can be instrumental in distinguishing between different coals. Four key characteristics of the DTG curve should be used when analyzing a burning profile. The initial temperature (IT) is the temperature at which pyrolysis is initiated. The fixed carbon initiation temperature (ITFC) is the temperature at which combustion of the coal begins. The IT and ITFC regions overlap because releasing volatiles from the coal sample creates conditions encouraging combustion. The peak maximum temperature (PT) is simply the temperature at the peak of the DTG curve, noting the temperature at which the maximum rate of weight loss occurs. The burnout temperature (BT) is the temperature at which the weight loss has ended and a baseline weight has once again been reached. Figure 1 shows each of the four previously discussed characteristics for a coal sample.
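Two of these characteristic points, PT and BT, can be picked off a digitized DTG curve automatically; the sketch below uses a synthetic Gaussian peak and a hypothetical baseline tolerance rather than real coal data:

```python
import math

# Locating the peak maximum temperature (PT) and burnout temperature (BT)
# on a DTG curve. The data here are synthetic, for illustration only.
def dtg_characteristics(temps, rates, baseline_tol=0.01):
    """Return (PT, BT): temperature of the maximum rate of weight loss,
    and the first temperature after the peak where the rate falls back
    to the baseline (rate < baseline_tol)."""
    i_peak = max(range(len(rates)), key=lambda i: rates[i])
    pt = temps[i_peak]
    bt = temps[-1]
    for i in range(i_peak, len(rates)):
        if rates[i] < baseline_tol:
            bt = temps[i]
            break
    return pt, bt

# Synthetic DTG curve: a Gaussian weight-loss peak centred at 500 C
temps = [200 + i for i in range(701)]
rates = [math.exp(-((t - 500) / 60.0) ** 2) for t in temps]
pt, bt = dtg_characteristics(temps, rates)
print(f"PT = {pt} C, BT = {bt} C")
```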
Figure 1. Characteristic Points on DTG Curve of Coal Sample

Coals of higher rank generally have a higher peak maximum temperature, as shown by Figure 2.
Figure 2. DTG Curves of Coal Samples of Various Ranks

This trend occurs because coals of higher rank contain less mineral matter, effectively raising the calorific value. Figure 3 shows that the peak temperature follows a pattern as a function of carbon content.
Figure 3. Peak Temperature as a Function of Carbon Content

For carbon contents in the range of 82-84%, the peak temperature differs by a large amount. However, in the regions of 76-78% and 88-90% the peak temperature varies by only a slight amount. The rate at peak temperature also follows a trend: coals with 84-85% carbon content experience the highest rate at peak temperature and thus burn most efficiently. Proximate analysis is the determination of the moisture, ash and volatile matter. ASTM standard methods have been written for proximate analysis. Figure 4 shows how TGA can be used to perform the proximate analysis of a coal sample.
Figure 4. Proximate Analysis Using TGA
The furnace temperature is ramped to 110ºC and held isothermally. This ensures that any weight loss experienced is a direct effect of the moisture of the coal. The temperature is then ramped to 900ºC and held isothermally. Any weight loss occurring in this isothermal region is a direct result of the loss of volatiles. The previous two steps are performed in a nitrogen atmosphere. For the third part, the atmosphere is changed to oxygen, creating an environment suitable for combustion. Once the coal is completely combusted, the residue is taken as the ash. Coal blends are used to make coal burning more environmentally benign. Coals having high sulfur contents can be blended with low-sulfur coal to decrease SO2 emissions while retaining the efficiency. TGA is a very versatile instrument for assessing the feasibility of using coal blends. The linear additive rule, a relationship defined by the properties of each coal in a blend and the amount of that coal in the blend, can be used to estimate the theoretical composite value of a blend, but TGA must be used to establish whether or not the properties of the blend are additive. A property is additive when the blend's physical property can be predicted from the relative amounts of the component coals and their physical properties. A series of coal blends were studied under isothermal and non-isothermal conditions in order to determine which physical properties of specific coal blends are additive or non-additive [1]. Collectively, the TG curves show that some TG parameters under non-isothermal combustion conditions, such as residue and weight loss, are additive, while others, such as peak temperature and maximum rate, are not. For isothermal combustion the peak temperature and maximum rate are additive, while the residue and combustion end-point temperature are not. TGA is utilized to such a great extent because TGA analysis of coal blends is fast and simple and yields precise and accurate results.
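The linear additive rule amounts to a mass-fraction weighted average of the component properties. A minimal sketch (the blend fractions and sulfur contents below are hypothetical):

```python
# Linear additive rule for a blend property: the composite value is the
# mass-fraction weighted sum of the component coals' values. Comparing
# this estimate with the measured TG value indicates whether the
# property behaves additively.
def additive_estimate(fractions, values):
    """Weighted composite property; fractions must sum to 1."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("mass fractions must sum to 1")
    return sum(f * v for f, v in zip(fractions, values))

# 70/30 blend of a 2.8 % and a 0.6 % sulfur coal
est = additive_estimate([0.7, 0.3], [2.8, 0.6])
print(f"estimated blend sulfur: {est:.2f} %")  # prints "estimated blend sulfur: 2.14 %"
```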
Chlorine in coal, in the form of HCl, has the potential to generate chlorinated hydrocarbons that can be released into the atmosphere and may be capable of causing corrosion [2]. TG-FTIR and TG-MS show good correlation with the temperatures at which HCl is evolved. Figures 5 and 6 are the TG-FTIR and TG-MS plots, respectively.
Figure 5. TG-FTIR of Coal Blend
Figure 6. TG-MS of Coal Blend

However, the TG-MS plot shows that the HCl evolution appears to occur in three regions. Figure 7 shows that the amounts of HCl and SO2 released, as determined by integration of the FTIR curves, correlate very well with the actual chlorine and sulfur percentages of the coal.
Figure 7. SO2 and HCl Emissions as a Function of Actual Sulfur and Chlorine Content

Figure 7 shows the great precision and accuracy that TG-FTIR provides in determining the sulfur and chlorine content in coal. Figure 8 shows that HCl is evolved at lower temperatures for coals containing higher quantities of chlorine.
Figure 8. Maximum Temperature of HCl Release as a Function of Chlorine Content

This trend occurs because HCl becomes less tightly bound when present in higher concentrations. The temperature maximum of the second weight loss as determined by the DTG curve and by the MS curve show good correlation. The intensity ratio is the ratio of the integral of the second peak to the integral of the first peak. The curve shows that British coal has a much higher intensity ratio than US coal, supporting the view that British coal is more corrosive than US coal. The particle size of the sample affects the temperature at which the HCl is evolved. Figure 9 shows that the two peaks in the TG/MS curve are the result of HCl existing in two different forms within the coal.
Figure 9. TG-MS of Coal
It can be concluded that the chlorine content of the coal correlates well with the amount of HCl evolved from the coal. It can also be concluded that, generally, coals composed of smaller particles evolve HCl at lower temperatures. HCl is released in three distinct regions: the first is due to HCl adsorbed on pore walls, the second represents more tightly bound HCl, and the third results from inorganic chlorides. Thermal analysis can be used to determine the components of combustion products in coal. Knowing the combustion products can help improve efficiency and increase environmental awareness. At the North American Thermal Analysis Society meeting in the fall of 1989, S. A. Mikhail and A. M. Turcotte first proposed using TGA techniques for fly ash analysis [3]. The Mikhail-Turcotte method had shortcomings, including low carbon-percentage determination and uncertainty about the decomposition mechanism of CaSO4, which is used to determine the sulfur content of the fly ash. The new method refines the Mikhail-Turcotte TGA method to alleviate some of these drawbacks. Fly ash is a multi-component residue composed of carbonaceous material, moisture, CaCO3, Ca(OH)2, CaSO4 and ash. The ASTM methods are not capable of determining multiple components in fly ash simultaneously; thus, the use of ASTM methods is time consuming and tedious. There are two key differences between the new method and the Mikhail-Turcotte method [3]. The Mikhail-Turcotte method burns the carbon in air prior to decomposing the CaCO3 in nitrogen. Because the carbon burns first, there is excess CO2 in the atmosphere; this CO2 can then combine with CaO to generate additional CaCO3. Because the two reactions overlap, the apparent carbon percentage using the Mikhail-Turcotte method is lower than the actual percentage. The new method converts CaO into CaCO3 prior to combusting the coal.
This prevents CaO from adsorbing CO2 produced by the combustion of the coal. This is rather insignificant for analyzing bed ash, for which the Mikhail-Turcotte method was developed, but it is very important for analyzing carbon-rich fly ash. The second difference between the two methods is the proposed decomposition mechanism of CaSO4. The new method shows a mechanism in which CaSO4 in a H2 atmosphere is reduced to CaS, whereas the Mikhail-Turcotte method left uncertainty in the decomposition mechanism. This conclusion was drawn from the TG/FTIR results shown in Figure 10.
Figure 10. TG-FTIR Using New Method to Show Sulfur Reduction Mechanism
This is important because, by measuring the CaS residue, the initial sulfur content can be calculated, which was a limitation of the Mikhail-Turcotte method. Thermal analysis techniques provide a faster method with precise results, as shown in Figure 11, a comparison of the TGA method results and the ASTM results.
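Converting the CaS residue back to an initial sulfur content is simple stoichiometry (a sketch with rounded molar masses and hypothetical sample masses):

```python
# Initial sulfur content from the CaS residue: the new method reduces
# CaSO4 to CaS in H2, so m(S) = m(CaS) * M(S) / M(CaS).
M_S = 32.07    # g/mol
M_CAS = 72.15  # g/mol

def sulfur_percent(sample_mg, cas_residue_mg):
    """Sulfur wt% of the original sample from the CaS residue mass."""
    return 100.0 * cas_residue_mg * M_S / M_CAS / sample_mg

# Hypothetical figures: 10 mg of fly ash leaving 0.5 mg of CaS
print(f"{sulfur_percent(10.0, 0.5):.2f} % S")
```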
Figure 11. Comparison of Sulfur Determination by the New Method and ASTM

The new method can accurately determine six components of an ash sample simultaneously. Coal combustion occurs as a two-step process. By modeling the combustion using Arrhenius relationships and autocatalytic reaction behavior, the rate constants for different ranks of coal can be generated [4]. The kinetic constant of low-reactivity combustibles is much smaller than that of high-reactivity combustibles, as seen in Figures 12 and 13.
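The Arrhenius part of such a model is easy to sketch; the pre-exponential factor and activation energies below are illustrative values, not the fitted constants of reference [4], and the autocatalytic term is omitted:

```python
import math

# Arrhenius rate constant k(T) = A * exp(-Ea / (R*T)).
R = 8.314  # J/(mol K)

def rate_constant(a, ea_j_per_mol, temp_k):
    """Arrhenius rate constant for pre-exponential a and activation energy Ea."""
    return a * math.exp(-ea_j_per_mol / (R * temp_k))

# High-reactivity (lower Ea) vs low-reactivity (higher Ea) char at 800 K
k_high = rate_constant(1e6, 80e3, 800.0)
k_low = rate_constant(1e6, 130e3, 800.0)
print(f"k_high = {k_high:.3e}, k_low = {k_low:.3e}")
```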
Figure 12. Kinetic Constants for High-Reactivity Combustibles
Figure 13. Kinetic Constants for Low-Reactivity Combustibles

Although both the high- and low-reactivity constants show temperature dependence, that of the low-reactivity combustibles is lower. The ignition temperature increases with decreasing volatile matter of the coal, as seen in Figure 14.
Figure 14. Ignition Temperature as a Function of Volatile Matter Content

The previous applications show how valuable TGA and evolved gas analysis are to the study of various coal systems. TGA provides a very rapid and precise method that is typically accurate in predicting trends seen in large-scale applications, providing a cheaper route for industry. It is also very important that TGA's shortcomings be fully understood so that the instrument is not used for applications it is not capable of performing.
References
1. Wei-Ping Pan, Yaodong Gan, Mohamad A. Serageldin, "A Study of Thermal Analytical Values for Coal Blends in Air Atmosphere," Thermochimica Acta, 1991, 180, 203-17.
2. J. Napier, J. Heidbrink, J. Keene, H. Li, W.P. Pan, J.T. Riley, "A Study of On-Line Analysis of Chlorine During Coal Combustion," Amer. Chem. Soc., Fuel Div. Preprints, 1996, 41(1), 56-61.
3. H. Li, X. Shen, B. Sisk, W. Orndorff, D. Li, W.P. Pan, J. Riley, "Studies of Fly Ash Using Thermal Analysis Techniques," J. of Thermal Analysis, 1997, 49, 943-51.
4. Y. Chou, S. Mori, W.P. Pan, "Estimating the Combustibility of Various Coals by TG/DTA," Energy & Fuels, 1995, 9, 71-74.
Characterization of polymer materials using FT-IR and DSC techniques

Pere Pagès

Departamento de Ciencia de Materiales, Universitat Politècnica de Catalunya. Colom, 11, 08222-Terrassa. Spain
[email protected]

1. Introduction

In this chapter the author presents various studies on the structural characterisation of polymer materials using infrared spectrophotometry (FT-IR) and differential scanning calorimetry (DSC). The results obtained by the two techniques are in all cases complementary, leading to a better understanding of the structure of the polymers.

2. Part one: FT-IR and DSC study of HDPE structural changes and mechanical property variation under weathering during the Canadian winter

This work studies the influence of the climatic conditions during the Canadian winter using high-density polyethylene (HDPE) samples exposed to weather conditions for different periods of time. Under these conditions, it is reasonable to think that the chemical changes caused by climatic degradation will be due primarily to photochemical reactions (sunlight) and, to a lesser extent, to hydrolytic reactions (environmental moisture). FT-IR was used to study the microstructural changes. Many of these microstructural modifications are thought to give rise to changes in the crystalline content. As is well known, crystallinity is closely related to the macroscopic properties of the polymer, and this knowledge is fundamental in engineering applications. Therefore, another objective of this work is the study of the crystallinity variation of the polymer subjected to drastic environmental conditions using two techniques: FT-IR and DSC. The variation of the mechanical properties of HDPE exposed to the indicated climatic conditions was also analyzed.

2.1. Experimental

2.1.1. Materials

HDPE (HDPE 2909, Du Pont Canada) is a thermoplastic polymer with the following properties: density 960 kg/m3 and melt flow index 1.35 g/min.

2.1.2. Preparation of HDPE Samples
Samples were prepared in a mold according to ASTM D-638 (type V). The HDPE, previously milled and screened, was compacted in a mold at 3 MPa pressure for 20 min at room temperature. It was then heated to 150ºC and the pressure increased to 3.2 MPa for 20 min. Demolding was accomplished by slowly cooling the mold to room temperature to prevent bubble formation.
2.1.3. Environmental Conditions
Samples were exposed to the weather for different periods of time: 0, 15, 30, 60 and 90 days. Figure 1 shows the daily minimum and maximum temperatures during the period studied. In addition to the low temperatures, there is a very pronounced difference in temperature between day and night (i.e., thermal fatigue). As a result, the treatments can be considered doubly drastic.

2.1.4. Analytical Techniques
Microstructural changes in the HDPE were determined by FTIR spectrophotometry. For the variation of crystallinity, two instrumental analytical techniques were used: FTIR and DSC.

FTIR Spectrophotometry

A Nicolet 510 M spectrometer with CsI optics was used to obtain the FTIR spectra. The samples were prepared by dispersing the surface of the finely divided sample (9 mg) in a matrix of KBr (300 mg), followed by compression at 167 MPa to compact the pellet. To evaluate the microstructural changes undergone by the HDPE samples, the pertinent spectra were obtained for each of the treatment periods (from 0 to 90 days). Based on these "basic" spectral recordings, the variations that occurred were analyzed: formation/disappearance, increase/decrease, and displacement of the various bands. To evaluate these differences, subtraction between the various spectra was used, enlarging the spectral zones that give the best information, 775-1540 and 1600-1800 cm-1 [1].
Figure 1. Maximum and minimum temperatures during the period of HDPE exposure to environmental conditions.
These zones include and surpass the spectral zones used in previous works to study PE aging [2]. Concerning crystallinity, the literature indicates that it was first determined from the ratio of the absorption intensities at 1303 cm-1 (amorphous phase) in the solid and molten states [7]. This method implies uncertainty, as the absorption coefficients in the two states are not the same. A procedure based on a universal calibration constant and on the measurement of the absorption intensity at 1303 cm-1 of the amorphous phase was later developed. Unfortunately, this method can only be applied if the film density and thickness are known [8].
CHARACTERISATION OF POLYMER MATERIALS USING FT-IR AND DSC TECHNIQUES
Zerbi et al. [9] recently suggested the use of spectral bands corresponding to the bending vibrations: 1474 and 730 cm-1 (crystalline phase) and 1464 and 720 cm-1 (amorphous phase). This procedure was selected for this work as the preparation technique in pellets provides intense absorptions in such spectral bands while the absorption at 1303 cm-1 gives low intensity due to the high crystallinity of the HDPE. Therefore, mainly two spectral zones were analysed: 600-800 cm-1 (containing the bands 720 and 730 cm-1) and 1400-1550 cm-1 (containing the bands 1464 and 1474 cm-1).
DSC
This technique was used to support the crystallinity results obtained by FTIR. A Mettler DSC 30 analyzer, operated with liquid nitrogen and capable of reaching a maximum sensitivity of 0.4 mJ/s per 100 divisions of the recording paper, was used to obtain the thermograms. The sample weight varied between 2.0 and 3.0 mg; sufficiently small weights were selected to prevent heat-transfer problems, as already shown in previous thermogravimetric studies [10]. The heating rate was 20 K/min, a balanced compromise between the measuring speed and the peak resolution. The temperature range analysed was 50-200ºC. The temperature and energy calibration was achieved by means of In, Pb, and Zn standards under the same analytical conditions as the HDPE samples.
2.1.5. Mechanical Properties
The tensile strength, elasticity modulus, and impact energy were determined by standardized procedures to study the influence of aging on the mechanical properties.
2.1.6. Analysis of Chemical Changes
With the spectral subtractions of the different samples exposed for various periods of time, taking the nondegraded HDPE sample as reference, tables were prepared in which the most relevant results are shown. Table 1 shows the most significant bands studied and their distinctive functional groups. Table 2 defines the behavior of all the specified bands, listed according to generated, transformed, or invariable functional groups as a result of a 15-day exposure. The progress of the microstructural configuration phenomena with increasing exposure time is shown in Table 3. A comparative study with the results found by D'Esposito and Koenig [2] shows a significant coincidence in most of the bands studied, although some discrepant bands also occur. These discrepant values are corroborated by characteristic and original bands pertaining to the region of 1700-1800 cm-1 that were not included in the abovementioned work, even though these bands confirm groups obtained within the range 750-1425 cm-1.
Table 1. Spectral FTIR Bands Studied

(1) 900 cm-1, RR'C=CH2: C-H rocking
(2) 909 cm-1, RCH=CH2: C-CH2 out-of-plane bending
(3) 971 cm-1, (trans) R'CH=CHR: =C-H bending, where R and R' are alkyl groups
(4) 990 cm-1, RCH=CH2: =C-H out-of-plane bending, related to (2)
(5) 1068 cm-1, RCH2-CHOH-CH2R': C-O stretching, corresponding to a secondary alcohol, where R and R' are groups with unsaturations
(6) 1131 cm-1, RCH2-COH(CH3)-CH2-: C-O stretching, corresponding to a tertiary alcohol
(7) 1177 cm-1, -CH(CH3)2: C-C stretching and C-C-H bending
(8) 1368 cm-1, -C(CH3)3: doublet in C-H bending
(9) 1360 cm-1, -CO-CH3: -CH3 symmetric vibration in an ether
(10) 1375 cm-1, -CH3: C-H symmetric bending
(11) 1410 cm-1, RCH2-CO-CH2R: -CH2- scissoring
(12) 1653 cm-1, (cis) R'CH=CHR: terminal bond vibration, where R and R' are alkyl chains
(13) 1692 cm-1, R-CO-OR': C=C stretching, where R and R' are vinyl groups
(14) 1738 cm-1, R-CO-OR': C=O stretching, where R and R' are alkyl groups
Table 2. Functional Groups Resulting from HDPE Aging for a 15-Day Exposure Time (by FTIR)

Generated groups:
(7) 1177 cm-1: -CH(CH3)2
(8) 1368 cm-1: -C(CH3)3
(9) 1360 cm-1: -CO-CH3
(10) 1375 cm-1: -CH3
(11) 1410 cm-1: R-CH2-CO-CH2-R'
(13) 1692 cm-1: R-CO-OR'
(14) 1738 cm-1: R-CO-OR'

Transformed groups:
(2) 909 cm-1: R-CH=CH2
(12) 1653 cm-1: RHC=CHR' (cis), terminal unsaturations related to (2)

Unchanged groups:
(1) 900 cm-1: R'RC=CH2
(3) 971 cm-1: (trans) RCH=CHR'
(4) 990 cm-1: RCH=CH2
(5) 1068 cm-1: RCH2-CHOH-CH2R'
(6) 1131 cm-1: -CH2-C(CH2R)OH-CH2
Table 3. Types of Spectral Bands Resulting from HDPE Aging for 15-, 60-, and 90-Day Exposure Times to Environmental Conditions (by FTIR)

15 days
Intensity increase, generated groups (cm-1): 900, 1177, 1368, 1375, 1360, 1410, 1692, 1738
Intensity decrease, transformed groups (cm-1): 909, 990, 1653
Unchanged intensity (cm-1): 971, 1068, 1131

60 days
Intensity increase (cm-1): 900, 971, 1368, 1375, 1360, 1131, 1692, 1738
Intensity decrease (cm-1): 909, 990, 1068, 1653
Unchanged intensity (cm-1): 1177, 1410

90 days
Intensity increase (cm-1): 900, 971, 1368, 1375, 1360, 1131, 1692, 1738
Intensity decrease (cm-1): 909, 990, 1068, 1653
Unchanged intensity (cm-1): 1177, 1410
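The band bookkeeping behind Tables 2 and 3 amounts to comparing each band's intensity in an aged spectrum against the unaged reference and labelling the band as generated, transformed, or unchanged. A minimal sketch of that classification follows; the function name, the 5% threshold, and the intensity values are illustrative assumptions, not the authors' procedure or data.

```python
def classify_bands(reference, aged, threshold=0.05):
    """Label each band by its relative intensity change vs. the reference.

    reference, aged: dicts mapping band position (cm-1) -> intensity.
    threshold: hypothetical relative-change cutoff (5% here).
    """
    labels = {}
    for band, i0 in reference.items():
        change = (aged[band] - i0) / i0
        if change > threshold:
            labels[band] = "generated (intensity increase)"
        elif change < -threshold:
            labels[band] = "transformed (intensity decrease)"
        else:
            labels[band] = "unchanged"
    return labels

# Hypothetical intensities for three bands, unaged vs. aged:
ref  = {900: 0.20, 909: 0.30, 971: 0.25}
aged = {900: 0.28, 909: 0.21, 971: 0.25}
# 900 -> generated, 909 -> transformed, 971 -> unchanged
print(classify_bands(ref, aged))
```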
On comparing the results of the first 15 and 30 days of exposure, the trends followed by the increase bands (generation of groups) and the decrease bands (transformation of groups) remain unchanged. The bands at 900 and 990 cm-1, initially belonging to the invariant group, are incorporated into the increase and decrease blocks, respectively. The results at 60 days of degradation confirm the previously mentioned values. The block characteristic of transformed groups was enlarged by the incorporation of an invariant band, while the block characteristic of generated groups did not change. In view of these results, it was confirmed that the microstructural changes undergo alterations during the first 60 days of exposure, after which they become stable. The results confirm the microstructural and configurational modifications occurring in the polymeric chains. Such modifications are defined by a series of mechanisms involved in the HDPE degradation:
1. chain breaking, with the formation of characteristic groups (methyl, tert-butyl, isopropyl, and end unsaturations) due to homolytic and heterolytic dissociations, reflected in the positive-evolution bands (900, 1177, 1368, 1375, and 1678 cm-1);
2. chain branching, with generated groups defined by the bands at 1177, 1368, and 1375 cm-1 and with transformed groups (909, 990, and 1653 cm-1) that confirm these modifications;
3. crosslinking between polymeric chains caused by addition reactions on double bonds; the characteristic groups defining this type of modification appear in the negative-evolution bands (909 and 1653 cm-1);
4. oxidation phenomena, defined by the positive-evolution bands (1760, 1410, 1692, and 1675 cm-1) and involving the formation of peroxides, alcohol groups, and carboxylic groups.
To summarize this study, Table 4 shows the evolution of the different characteristic bands that define each phenomenon occurring in the configurational variations and their relationship to the various exposure times.
2.1.7. Crystallinity Variation
The empirical relation proposed by Zerbi et al. [9] was used to evaluate crystallinity:

X = [1 − (Ia/Ib)/1.233] / [1 + Ia/Ib] × 100

where X is the percentage of amorphous content and Ia and Ib are the absorption intensities of the bands at 730 and 720 cm-1 or, alternatively, at 1474 and 1464 cm-1, respectively. The constant 1.233 corresponds to the intensity ratio of these bands for fully crystalline HDPE. Table 5 illustrates the variation of the HDPE amorphous and crystalline contents as a function of the exposure time, together with the calorimetric features (initial and final melting temperatures and the melting enthalpy). The results show relevant discrepancies depending on the spectral bands selected for the evaluation of the amorphous and crystalline contents. The intensity ratio 730/720 cm-1 leads to random results for crystallinity, without the possibility of establishing the level of degradation in terms of
its duration. In addition, the crystallinity varies in the range 71.5-76.6%, values too low for HDPE, which is a highly crystalline polymer. The density and melt flow index of the HDPE confirm that it is a material prepared with a Phillips-type catalyst, whose crystallinity is around 90% [11]. The conclusion from these facts is that the measurement of crystallinity through the 730/720 cm-1 bands is not adequate. On the contrary, the 1474/1464 cm-1 bands indicate that the crystallinity progressively decreases, from 98 to 95%, as the weathering exposure time increases. Moreover, these values are consistent with those expected for the HDPE of this study. On the other hand, the calorimetric study that follows supports the validity of using the 1474/1464 cm-1 bands: from the thermograms obtained it is clear (see Table 5) that the melting enthalpy decreases (i.e., the crystallinity decreases) as the exposure time increases. This study has demonstrated that aging is caused by specific chemical transformations undergone by the polymeric chains.
Table 4. HDPE Microstructural Variations as a Function of Exposure Time (by FTIR)
Chain breaking
Band (cm-1)   15 days   30 days   60 days   90 days
900           inv       +         ++        ++
971           inv       inv       +         ++
1177          +         +         ±         ±
1368          ++        ++        ++        ++
1374          inv       +         ++        ++

Chain branching
909           -         -         -         -
990           inv       -         -         -
1653          -         -         -         -
(971, 1177, 1368, and 1374 evolve as in the chain-breaking block)

Oxidation phenomena
1360          +         ++        +++       ++
1410          +         +         ±         ±
1692          ++        +++       +++       ++
1738          ++        ++        ++        +++

inv, unchanged; +, increase; -, decrease; ±, slight increase.
These reactive phenomena decrease the linear character of the polymeric chains through the formation of bulky groups, which leads to an increase of the amorphous content. Therefore, the crystallinity results found by DSC are in accordance with those obtained by FTIR from the 1474/1464 cm-1 bands. Figure 2 shows the linear relation between the melting enthalpy and the HDPE crystallinity degree. This correlation indicates that for each 1% of crystallinity lost in the aging process, the melting enthalpy decreases by 3.8 J/g. By extrapolation of the straight line to 100% crystallinity, a melting enthalpy of 229.0 J/g is obtained, corresponding to fully crystalline HDPE.
Table 5. Variation of HDPE Amorphous and Crystalline Contents (by FTIR), Temperatures, and Melting Enthalpy (by DSC) as a Function of Exposure Time

                      Ia = 730 / Ib = 720 cm-1        Ia = 1474 / Ib = 1464 cm-1
Exposure time (days)  Amorphous (%)  Crystalline (%)  Amorphous (%)  Crystalline (%)  Initial-final melting temperatures (ºC)  Melting enthalpy (J/g)
0                     28.10          71.90            2.02           97.98            126.7-149.8                              221.4
15                    28.52          71.48            3.76           96.24            127.3-145.9                              215.3
30                    26.98          73.02            4.16           95.84            127.1-145.2                              212.8
60                    26.35          73.65            4.80           95.20            126.3-145.7                              211.2
90                    27.38          72.62            4.88           95.12            123.6-142.1                              210.7
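The crystallinity arithmetic of the Zerbi relation and the enthalpy-crystallinity correlation discussed around Table 5 can be reproduced in a few lines. The sketch below is illustrative (function and variable names are ours, not the authors' software); it uses the 1474/1464 cm-1 columns of Table 5.

```python
# Zerbi et al. relation: amorphous content X (%) from the intensity
# ratio Ia/Ib (730/720 or 1474/1464 cm-1 bands); 1.233 is the band
# intensity ratio of fully crystalline HDPE.
def amorphous_content(i_a, i_b, k=1.233):
    r = i_a / i_b
    return (1 - r / k) / (1 + r) * 100

# A fully crystalline sample (ratio exactly 1.233) gives X = 0:
assert abs(amorphous_content(1.233, 1.0)) < 1e-9

# Least-squares fit of melting enthalpy (DSC) vs. crystallinity
# (FTIR, 1474/1464 cm-1 bands), using the Table 5 data:
crystallinity = [97.98, 96.24, 95.84, 95.20, 95.12]  # %
enthalpy = [221.4, 215.3, 212.8, 211.2, 210.7]       # J/g

n = len(crystallinity)
mx, my = sum(crystallinity) / n, sum(enthalpy) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(crystallinity, enthalpy))
sxx = sum((x - mx) ** 2 for x in crystallinity)
slope = sxy / sxx                 # J/g lost per 1% crystallinity
dh_100 = my + slope * (100 - mx)  # extrapolation to 100% crystallinity

print(round(slope, 2), round(dh_100, 1))  # -> 3.75 229.0
```

The fitted slope (about 3.7-3.8 J/g per 1% crystallinity) and the extrapolated enthalpy (229.0 J/g) match the values quoted in the text for Figure 2.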
Figure 2. Relation between melting enthalpy and crystallinity of HDPE.
Table 6 shows the evolution of the mechanical properties studied. The decrease of the impact energy, caused by the formation of bulky groups that impart stiffness to the polymeric chains, is remarkable. The other mechanical properties evaluated (tensile strength and Young's modulus) do not vary significantly with exposure, as expected, since they basically depend on crystallinity; note that the HDPE crystallinity varies by no more than 3% (environmental exposure of 90 days).
2.2. Conclusions, Part One
HDPE undergoes aging when submitted to drastic climatic conditions such as the Canadian winter: low temperatures and sharp temperature changes between day and night (i.e., intense thermal fatigue). This aging becomes apparent through a series of chemical changes in the polymeric chains and a progressive decrease of HDPE crystallinity as the weathering exposure time increases.
The study of the spectral bands of samples degraded during different weathering exposure times demonstrated the existence of a series of microstructural modifications: chain breaking, chain branching, crosslinking, and oxidation. These configuration changes obviously influence the polymer crystallinity, which was evaluated by quantifying the FTIR absorption intensity in two spectral bands: one characteristic of the amorphous phase and another of the crystalline phase. Two zones of the spectrum corresponding to bending vibrations, 730/720 cm-1 and 1474/1464 cm-1, were analyzed. The use of the 1474/1464 cm-1 bands was appropriate for the evaluation of crystallinity, while the 730/720 cm-1 bands yielded random and too-low results (71-77%, with respect to the values that should be obtained for HDPE prepared with a Phillips-type catalyst, usually higher than 90%). Consequently, the use of the latter bands to evaluate the percentage of crystalline character of the polymer was rejected. The DSC results ratify those obtained by FTIR at 1474/1464 cm-1, as the melting enthalpy, and therefore the crystallinity, decreases with the weathering exposure time. Similarly, a linear relation between the melting enthalpy and crystallinity was made evident, in such a way that 1% less crystallinity involves a 3.8 J/g decrease of the melting enthalpy. By extrapolation of this straight line to 100% crystallinity, a value of 229.0 J/g for the melting enthalpy was found, which should correspond to an ideal, fully crystalline HDPE. The property most affected by the aging phenomena is the impact energy, owing to the stiffening of the polymeric chains. The other mechanical properties evaluated (tensile strength and elasticity modulus) remain almost constant, as they basically depend on the crystalline content of the polymer, which decreases by approximately 3% after 90 days of weathering exposure.
Table 6. Variation of HDPE Mechanical Properties as a Function of Exposure Time

Exposure time (days)  Tensile strength (MPa)  Young's modulus (GPa)  Impact energy (mJ)
0                     23.1                    1.46                   80.6
15                    24.5                    1.49                   70.2
30                    22.6                    1.53                   64.9
60                    22.6                    1.52                   41.9
90                    23.6                    1.40                   37.7
3. Part two: natural and artificial aging of polypropylene-polyethylene copolymers
It is known that exposing polymeric materials to environmental and artificial atmospheres changes their properties and external appearance, with some modification of their surfaces. Several chemical reactions, induced by irradiation with sunlight, take place because of the chromophoric groups present in the polymer and the consequent ability of the polymer to absorb ultraviolet light. Photoreactions usually induce damage in polymer materials, causing embrittlement and color changes [12-14]. For the prevention of these phenomena, several methods have been developed. One of the most important is stabilizing the polymers with additives (i.e., antioxidants and light stabilizers).
To develop effective formulations, we must first know the causes and mechanisms of the degradation. Fourier transform infrared (FTIR) spectroscopy, differential scanning calorimetry (DSC), and scanning electron microscopy (SEM) are techniques used to evaluate the degradation process of polymeric materials. One of the earliest works performed with FTIR spectrophotometry was based on a study of PE films exposed to high temperatures [15]. More recently, several studies on the artificial aging of different polymers were carried out, and their effects were quantified by FTIR spectrophotometry and photoacoustic FTIR [14, 16-18]. In this article, we report on aging-induced changes in the structural and thermal properties of polypropylene (PP)-polyethylene (PE)-based copolymers that were used as seats in the Olympic stadium of Barcelona, Spain. The samples were exposed to natural aging by weather for 2.5 years and to artificial aging by exposure to radiation from a xenon lamp for 5000 h, which is considered to be equivalent to 2.5 years of environmental exposure [19].
3.1. Experimental
The analyzed material was a PP-rich (~95 wt %) PP-PE copolymer manufactured by Repsol and marketed as PB 140. It was a block copolymer with short PE chains grafted to PP. The composition (95% PP) was chosen for its mechanical properties, that is, a high degree of toughness. Several additives were added to the base material. Table 7 lists the various samples, which differed in the types and quantities of the additives employed. The additives were antioxidants (Tinuvin 770, Irganox BZ15, Bioxid Kronos CL 2220, and Quimasorb 944) and colorants (blue Cromoftal A3R, red Cromoftal DPP-BO, violet Cinquasia R RT 891D, Iagacolor 10401, Byferrox 110, and Iagacolor 415).

Table 7. Description of the Additives Present in the Studied Copolymers

Sample A: Tinuvin 770, Irganox BZ15, Quimasorb 944, Cromoftal A3R, Bioxid Kronos CL 2220
Sample B: Tinuvin 770, Quimasorb 944, Cromoftal DPP-BO, Cinquasia R RT 891D
Sample C: Tinuvin 770, Irganox BZ15, Quimasorb 944, Iagacolor 10401, Bioxid Kronos CL 2220
Sample D: Tinuvin 770, Quimasorb 944, Iagacolor 415, Byferrox 110
The paired samples A-B and C-D were prepared with different combinations of coloring additives according to the main industrial process. Samples A and B were subjected to artificial aging in a xenon test 450 chamber, with a xenon arc lamp as the radiating light source, to simulate natural aging. The artificial aging
time was 5000 h (equivalent to 2.5 years of natural aging). The samples were labeled A-5000 and B-5000, respectively. Samples C and D were aged under natural climatic conditions for 2.5 years and were labeled C-2.5 and D-2.5, respectively. The structural changes and thermal properties of the PP-PE copolymers were measured with FTIR spectrophotometry and DSC techniques. A Nicolet 510M spectrometer with CsI optics was used to obtain the FTIR spectra, with a resolution of 2 cm-1 and an average of 50 scans. Pellets were prepared by dispersing the surfaces of finely divided samples (3 mg) in a matrix of KBr (300 mg), followed by compression at 167 MPa. To determine changes in the FTIR spectra, several authors [14, 18, 20] chose a particular band as a reference to avoid deviations between spectra produced by samples of different weights or thicknesses. The absorption ratioed to this band is known as the reduced or compensated absorbance. In this work, the spectral reference band chosen was at 2840 cm-1, due to methylene symmetric stretching vibrations. The thermal behavior of the samples was analyzed with DSC. The measurements were made with a Mettler TA4000 thermoanalyzer coupled with a low-temperature (nitrogen coil) DSC 30 apparatus. The calibration of the temperatures and energies was made with standard samples of In, Pb, and Zn under the same conditions used in the sample analysis. The measurements were made with dry air as the purge gas at a flow rate of 20 mL/min. The heating rate was 10 K/min, a good compromise between the measurement rate and the endothermic melting peak resolution. The sample mass was about 2.5 mg, small enough to prevent problems caused by heat- and mass-transfer limitations. Several experiments were performed to ensure the reproducibility of the results, with the samples heated to 600, 300, or 185ºC. The microstructures of the samples were characterized by SEM in a Zeiss DSM 960 A apparatus. The resolution was 3.5 nm, the acceleration voltage was 15 kV, and the working distance was 10-20 mm. The samples were previously sputter-coated with C in a K 550 Emitech instrument. The aim was to observe microstructural changes arising from the degradation phenomena related to the aging processes, as well as the effects of the thermal treatments applied to the material.
Table 8. Variation of the Reduced Absorbance Average Values of All Samples as a Function of Artificial and Natural Aging
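The reduced (compensated) absorbance described above is simply each band's absorbance ratioed to that of the 2840 cm-1 reference band, so that pellets of different mass or thickness can be compared. A minimal sketch, with hypothetical band intensities for illustration:

```python
def reduced_absorbance(spectrum, ref_band=2840):
    """Divide every band absorbance by that of the reference band.

    spectrum: dict mapping band position (cm-1) -> absorbance.
    ref_band: reference band (2840 cm-1, methylene symmetric stretch).
    """
    a_ref = spectrum[ref_band]
    return {band: a / a_ref for band, a in spectrum.items()}

# Hypothetical absorbances for one pellet:
sample = {2840: 0.50, 1735: 0.10, 1650: 0.04, 1378: 0.30}
reduced = reduced_absorbance(sample)
print(reduced[1735])  # -> 0.2, independent of pellet thickness
```

Because both numerator and denominator scale with sample thickness, the ratio cancels that dependence, which is why aged and nonaged pellets can be compared directly.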
3.2.
Results and discussion
We analyzed the characteristic spectral bands of the polymer that were most modified during the aging process. The results are shown in Table 8. The chemical groups related to these bands are carbonyl (1735 cm-1), ether (1167 cm-1), nonsaturated bonds (1650 cm-1), methylene (1460 cm-1), and methyl (1378 cm-1). There are also three bands related to structural characteristics: configurational isomerism and conformational order (812, 901, and 976 cm-1). The evolution of these latter bands allowed us to determine structural and configurational changes.
Figure 3. FTIR spectra of samples C and D exposed to natural aging in the spectral range 1025-1380 cm-1.
3.2.1. Study of Natural Aging by FTIR
Figure 3 shows the spectral range 1025-1380 cm-1 for samples C and D exposed to natural aging. The 1168 cm-1 band, associated with the C-O-C group, shows a tendency to increase in the aged samples, suggesting the formation of ether groups. Moreover, the FTIR spectrum of sample D-2.5 has three bands at 1235, 1194, and 1080 cm-1 (marked by arrows) and two shoulders, one at 1294 cm-1 and another at 1154 cm-1 (marked by arrows), associated with transitions of the crystalline phase. These spectral differences between the aged and nonaged samples show that the crystallinity of sample D was more affected by the aging process. Sample C was less affected by aging, showing only small spectral changes (the 1235 cm-1 band and the shoulder at 1154 cm-1). Figure 4 shows the spectral range 750-1050 cm-1 corresponding to samples C and D, not aged and naturally aged. The absorbance of the 812, 976, and 998 cm-1 bands, related to conformational and configurational changes, is higher in the aged samples than in the nonaged ones. Moreover, the presence of the bands marked by arrows makes it evident that some changes were produced. The spectral band at 901 cm-1 moved to 912
cm-1, and this short shift was attributed to background changes generated by conformational modifications. Likewise, a small shoulder appears at 941 cm-1, and two peaks appear at 1015 and 1035 cm-1, associated with the stretching vibrations of C-O-C and C-O-H, respectively. Both peaks appear as a result of the oxidative process originated during natural aging.
3.2.2. Study of Artificial Aging by FTIR
Samples A and B were subjected to artificial aging in the xenon test chamber for 5000 h. The spectra of the aged and nonaged samples are shown in Figures 5 and 6. The changes detected in the two materials (A and B) are very similar. Nevertheless, these changes are not as important as those in the materials subjected to natural aging for 2.5 years (C and D). The spectra corresponding to the artificially aged samples (A and B) show that the degradation phenomena are weaker than in the previous case. In the artificially aged samples, only a slight increase in the band absorbances associated with the configurational order was detected. We can conclude, therefore, that exposure to weather leads to more aggressive degradation than that produced by exposure to the homogeneous conditions employed in the xenon test chamber in our study.
3.2.3. Comparison of Natural and Artificial Aging by FTIR
A comparative study of both types of aging processes (natural and artificial) is shown in Figure 7. An increase of the ester and ketone groups (1735 and 1717 cm-1) was detected. This increase was greater in the materials subjected to natural aging. Sample C was the most affected by carbonyl group formation.
Figure 4. FTIR spectra of samples C and D exposed to natural aging in the spectral range 750-1050 cm-1. Bands marked with arrows are related to large spectral changes.
Figure 5. FTIR spectra of samples A and B exposed to artificial aging in the spectral range 1025-1380 cm-1.
Figure 6. FTIR spectra of samples A and B exposed to artificial aging in the spectral range 750-1050 cm-1. Bands marked with arrows are related to large spectral changes.
Comparing samples C and A (Fig. 7), which differed only in the pigment used and the type of aging, we can observe that natural aging is far more aggressive than artificial aging. The formation of ether groups related to the 1168 cm-1 band (Figs. 3 and 5) is not clear in the artificially aged samples. This difference in the evolution of these groups must be influenced by parameters that are difficult to reproduce in artificial aging in a xenon test chamber (e.g., sudden changes of temperature and humidity, rain, and sea proximity). Furthermore, it is also difficult to reproduce the interactions of these parameters with the pigment used and their ability to react at high temperatures, mainly with sulfurous compounds [21] arising from pollution.
Figure 7. Reduced absorbance results of the spectral bands associated with the carbonyl group in all aged samples.
There are several bands related to the double bonds. The one that best helps to define the evolution of the aging process is the 1650 cm-1 band, associated with the stretching vibrations of -C=C-. The tendency of this band to diminish in the aged samples, as shown in Table 8, is similar in all samples. Nevertheless, it is remarkable that the artificially aged samples presented a decrease of 46%, compared with 33% for the naturally aged ones. The double bonds are generated in the first steps of the aging process, reacting subsequently to produce branching, crosslinking, or both. The samples exposed to natural aging had a lower percentage of double bonds but higher crosslinking. That is reflected in the configurational changes shown by the evolution of the main bands associated with the tacticity of PP (976, 901, and 812 cm-1) [22] in Figures 4 and 6. The decline of every band, shown in Figure 8, occurred for all (naturally or artificially) aged samples. The spectral band most sensitive to changes in the configurational order is the 976 cm-1 band, associated with ν(CC) in the chain and with the CH3 rocking mode.
Figure 8. Reduced absorbance corresponding to the spectral bands associated with configurational isomerism (976, 901, and 812 cm-1) for all aged samples.
Likewise, the tendency of the methyl (1378 cm-1) and methylene (1460 cm-1) groups to diminish was observed in all samples and in both aging processes. The results are given in Figure 9, which shows the evolution of the 1460 and 1378 cm-1 bands. The greatest decrease occurred in the naturally aged samples (being more important in C than in D). The breaking of the methyl group bonded to the tertiary carbon in the natural aging process is widely accepted, because this labile carbon easily becomes a free radical through elimination reactions [23, 24] in the presence of ultraviolet radiation. Moreover, in most degradation processes of polyolefins there is a decrease in the number of methylene groups; these phenomena help to confirm that the macromolecular chains suffer homolytic and heterolytic breakage [25]. In this work, as shown by the results listed in Table 8 and Figure 9, a methylene decrease was also detected.
Figure 9. Reduced absorbance corresponding to the spectral bands associated with methyl and methylene groups (1460 and 1378 cm-1) for all aged samples.
Figure 10. DSC curves corresponding to the newly manufactured samples (A, B, C, and D) and the aged samples (A-5000, B-5000, C-2.5, and D-2.5).
A chain break leads to the generation of terminal methyl groups, so an increase in methyl groups would be expected, but our data revealed a decrease. This apparent contradiction is due to the fact that the number of methyl groups generated by chain scission is lower than the number of methyl groups that disappear in the formation of free radicals at tertiary carbons. The radicals generated present a high reactivity and provoke the rapid formation of nonsaturated bonds, which act as precursors in branching and crosslinking processes. This transformation in the configurational order also appeared in samples A and B, although always as a minor change.
3.2.4. Comparison of Natural and Artificial Aging by DSC and SEM
Figure 10 shows the DSC thermograms of the samples, and Table 9 shows the characteristic thermal parameters of the melting and degradation processes. In the calorimetric study performed by DSC (heating up to 600ºC), several processes associated with melting and thermal decomposition (beginning after fusion) were detected. As shown in Table 9, the decomposition of the naturally aged samples (C and D) began at lower temperatures than in the other samples. The difference in the onset temperature (T0) of the decomposition process between the artificially aged samples (A and B) and the nonaged samples was as low as 1-2ºC, whereas the difference was greater for the C and D samples (natural aging), being as high as 25-27ºC. These differences also existed in the peak temperature (Tp) of the melting process. The decrease in the decomposition and melting temperatures is associated with shorter polymeric chains and a lower thermal stability of the material. In the naturally aged samples (C and D), the melting enthalpy presented a clear decrease (ca. 40%) in comparison with the nonaged ones (Table 9). The enthalpy decrease is intrinsically associated with a loss of the crystalline fraction of the copolymer, as previously shown by FTIR.
Table 9. Characteristics of the Melting and Thermal Decomposition Processes
The calorimetric changes detected in the artificially aged samples (A and B) were smaller than those observed in the naturally aged ones. The melting enthalpy (ΔH) diminished by about 14% in the A samples and by about 4% in the B samples. Previous studies on the crystallization kinetics of samples A and B led to the conclusion that these processes may be very sensitive to the presence of additives. These results are confirmed in this work through the spectrophotometric and calorimetric studies. Nevertheless, in samples C and D, natural aging produced modifications of a larger magnitude than the use of different additives did. The surface morphology of the aged samples depends on the type of aging. The surfaces of the samples aged by weather presented a scale structure, whereas the artificially aged surfaces preserved the original morphology, with the exception of several modified zones. Another difference was detected when the samples were subjected to a thermal treatment followed by cooling to the crystallization temperature. The structure after the solidification process became porous (Fig. 11, top). Samples A and B, newly manufactured or aged, presented similar morphologies, without preferential orientations, but the crystallization of samples C and D produced a micelle distribution (Fig. 11, bottom). This difference in behavior is related more to the presence of different additives than to degradation. Additive particles can act as heterogeneous nucleation centers. It appears that the macroscopic structural modifications produced by natural and artificial degradation occur on the surface, because an analysis of the internal areas of several broken samples did not show any differences between aged and nonaged samples.
3.3. Conclusions
From the results obtained, it can be stated that natural aging produces configurational and conformational changes of a higher order than artificial aging. Oxidative processes are present in both kinds of aging. The main changes are the formation of carbonyl groups, the scission of hydrocarbon chains, the formation of free radicals at tertiary carbons, and the initial formation of nonsaturated bonds, followed by the progressive participation of these bonds in branching and crosslinking reactions. The differences in behavior between samples subjected to the same type of aging can be attributed to the presence of different commercial additives. For sample C, the combination of three types of antioxidants led to a synergistic effect that improved its stability against natural aging. The catalytic effect of some colorants also appeared in the results obtained.
CHARACTERISATION OF POLYMER MATERIALS USING FT-IR AND DSC TECHNIQUES
Figure 11. Micrographs corresponding to B-5000 artificial aging (top) and C-2.5 natural aging (bottom).
DSC corroborated several of these results. The naturally aged samples showed a greater loss of crystallinity than those subjected to the xenon test. The melting enthalpy diminished by about 40% in the naturally aged samples and by only about 14% in the artificially aged samples. The morphology obtained during the crystallization process was clearly different in the samples subjected to the two types of aging. This fact could be attributed to the additives used and to the differences between the aging processes. The intended equivalence between the two types of aging has to be reviewed, because the heterogeneous conditions in which natural aging occurs (e.g., environment, salinity content, rain, and pollution) produce a greater and more extensive effect than those of the xenon test.
PERE PAGÈS
References
1. Tabb, D. L. and Koenig, J. L. Macromolecules, 8, 929 (1975).
2. D'Esposito, L. and Koenig, J. L. Fourier Transform Infrared Spectroscopy, Vol. 1, Ferraro, J. R. and Basile, L. J., Eds., Academic Press, New York, 1978, p. 73.
3. Musto, P.; Karasz, F. E. and MacKnight, W. J. Polymer, 34(14), 2934 (1993).
4. Delprat, P. and Gardette, J. L. Polymer, 34(5), 903 (1993).
5. Carrasco, F.; Kokta, B. V.; Arnau, J. and Pagès, P. Composites, 33(2), 46 (1993).
6. Pagès, P.; Arnau, J.; Carrasco, F. and Gironès, J. 6th Mediterranean Congr. Chem. Eng., Barcelona, Book of Abstracts, Vol. II, Departamento de Prensa y Publicaciones, Fira de Barcelona, Barcelona, 1993, p. 729.
7. Nikitin, V. N. and Pokrovskii, E. L. Doklady Akad. Nauk SSSR, 95, 109 (1954).
8. Tobin, M. C. and Carrano, M. J. J. Polym. Sci., 24, 93 (1957).
9. Zerbi, G.; Gallino, G.; Del Fanti, N. and Baini, L. Polymer, 30, 2324 (1989).
10. Carrasco, F. Thermochim. Acta, 213, 115 (1993).
11. Alger, M. S. M. Polymer Science Dictionary, Elsevier, New York, 1989, p. 344.
12. Kaczmarek, H. Polymer, 37, 189 (1996).
13. Carrasco, F.; Pagès, P.; Pascual, B. and Colom, X. Eur. Polym. J., 37, 1457 (2001).
14. Colom, X.; García, T.; Suñol, J. J.; Saurina, J. and Carrasco, F. J. Non-Cryst. Solids, 287, 308 (2001).
15. D'Esposito, L. and Koenig, J. L. In Applications of Fourier Transform Infrared to Synthetic Polymers and Biological Macromolecules, Ferraro, J. R. and Basile, L. J., Eds., Academic Press, New York, 1978, Vol. 1, Chapter 2.
16. Caykara, T. and Güven, O. Polym. Degrad. Stab., 65, 225 (1999).
17. Chiantore, O.; Trossarelli, L. and Lazzari, M. Polymer, 41, 1657 (2000).
18. Irusta, L. and Fernández-Berridi, M. J. Polymer, 40, 4821 (1999).
19. EN 13206:2001, Annex A, Point 8.10.
20. Romeu, J.; Pagès, P. and Carrasco, F. Rev. Plást. Mod., 74, 255 (1997).
21. Schnabel, W. Polymer Degradation: Principles and Practical Applications, Hanser, New York, 1981.
22. Bower, D. I. and Maddams, W. F. The Vibrational Spectroscopy of Polymers, Cambridge University Press, Cambridge, 1989.
23. Vollhardt, C. Organic Chemistry, Freeman, New York, 1987.
24. Linstromberg, W. W. Organic Chemistry: A Brief Course, Heath, Lexington, MA, 1979.
25. Pagès, P.; Carrasco, F.; Saurina, J. and Colom, X. J. Appl. Polym. Sci., 60, 153 (1996).
Characterization of Polymeric Materials by Thermal Analysis, Spectroscopy and Microscopic Techniques
Nathan Whitely, Weibing Xu, Sen Li and Wei-Ping Pan
Thermal Analysis Laboratory, Materials Characterization Center, Western Kentucky University, Bowling Green, KY 42101
[email protected]
As the world of technology continually drives the scientific community and the development of innovative instrumentation, it is important for the analytical chemist to take advantage of the wide range of knowledge that can be gained by using multiple modes of analysis. No single instrument is capable of entirely characterizing a material; therefore, the knowledge gained from multiple modes of analysis must be pieced together to provide the most accurate description of the sample. Using a single method provides only one dimension, but with additional methods the analysis becomes multi-faceted. Instrument systems are designed to gather a distinct set of data, with no single system providing a complete analysis. By coupling traditional thermal analysis techniques such as thermogravimetric analysis (TGA), thermomechanical analysis (TMA), and differential scanning calorimetry (DSC) with spectroscopic techniques such as Fourier transform infrared spectroscopy (FTIR), mass spectrometry (MS), and X-ray diffraction (XRD), nearly all aspects of a material's physical and chemical properties can be determined. Specifically, the importance of evolved gas analysis (EGA), thermal-IR, XRD, and micro-thermal analysis will be discussed. Evolved gas analysis is the technique used to quantitatively and qualitatively study volatile products formed during thermal degradation, by coupling thermal analysis instrumentation with other techniques capable of providing structural information. The volatile products, or evolved gases, released as a material is combusted or pyrolyzed are directly related to the chemical pathway of the degradation reaction. Thus, by studying the nature and amount of the volatile products, possible reactions can be mapped out to understand the kinetics of degradation. The two approaches to evolved gas analysis are simultaneous analysis and combined analysis.
In simultaneous analysis, exemplified by TG/MS and TG/FTIR, the same sample is examined during the same period of time. This on-line analysis provides a time dependency in which signals found in the TGA can be directly correlated with signals seen in the MS or FTIR spectrum during the same time period. Conversely, in combined analysis two separate samples are required and no real-time analysis is available. The versatile field of evolved gas analysis has two sampling methods. The sampling method in which the gaseous sample is directly introduced into the detector system is known as continuous mode. Both TG/MS and TG/FTIR utilize this type of sampling method; see Figure 1.
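The time correlation at the heart of simultaneous (on-line) analysis can be sketched numerically: if an evolved gas causes a given weight loss, the DTG maximum and the ion-current maximum for the corresponding m/z should coincide in time. A minimal sketch with synthetic (not measured) data:

```python
import numpy as np

# Synthetic illustration of simultaneous TG/MS time correlation.
# One sigmoidal weight-loss step centered at t = 30 min, and an MS
# ion-current trace for a single (hypothetical) m/z evolving with it.
time = np.linspace(0, 60, 601)                    # min
weight = 100 - 20 / (1 + np.exp(-(time - 30)))    # % weight, one loss step
dtg = -np.gradient(weight, time)                  # rate of weight loss
ion_current = np.exp(-((time - 30) ** 2) / 8)     # MS trace (arbitrary units)

# On-line coupling lets the two maxima be compared on the same time base.
t_dtg_max = time[np.argmax(dtg)]
t_ms_max = time[np.argmax(ion_current)]
print(f"DTG maximum at {t_dtg_max:.1f} min, MS maximum at {t_ms_max:.1f} min")
```

In combined (off-line) analysis this comparison is impossible, since the two signals come from separate samples with no shared time base.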
Figure 1. TG-MS and TG-FTIR Systems
The alternate sampling technique, called intermittent or batch mode, collects the gaseous sample at low temperatures or traps it within an absorbent chamber, then releases all of the volatiles into the detector system at the same time. A typical instrument demonstrating an intermittent sampling mode is Pyrolysis/GC-MS. MS offers a very sensitive system that gathers large quantities of structural data specific to each evolved gas species; see Figure 2.
Figure 2. GC/MS System
By far, the major advantage of the TG/MS and TG/FTIR systems is the ability to continuously and simultaneously gather both quantitative and qualitative information about the evolved gases; however, disadvantages exist that must be kept in mind so that the information collected is not misinterpreted. Overlapping peaks within both the TGA and FTIR data are a major contributor to erroneous analysis. Weak signals are no less significant than stronger signals, but they can be masked when located in close proximity to the stronger signals. Hi-Resolution TGA offers an alternate method of separating weak signals hidden by overpowering signals; however, FTIR peaks typically cannot be resolved further from either stronger signals or signal noise. Homonuclear diatomic molecules, which have no permanent dipole due to their symmetric geometry, go undetected by FTIR. When determining stereoisomerism is important, MS is not the appropriate instrument. The major disadvantage of both the TG/FTIR and TG/MS systems is the inability to detect the presence of high molecular weight
compounds. It is important to note these disadvantages in order to properly choose a complementary analysis technique that will provide the desired information. The Pyrolysis/GC-MS system helps alleviate the problem of detecting high molecular weight compounds that must be sacrificed with TG/FTIR and TG/MS. On the other hand, Pyrolysis/GC uses an intermittent sampling method that prevents any time- and/or temperature-dependent gas evolution profiles. The previous section dealt with techniques typically used for the analysis of organic materials; the forthcoming section will deal with XRD, a technique reserved for the analysis of crystalline or inorganic materials. XRD can be used either to identify a sample by the unique “fingerprint” of its X-ray powder pattern or, through X-ray crystallography, to provide structural data: specifically, how the atoms are packed together in crystalline form, and the interatomic distances and angles. Bragg’s law, shown in Figure 3, is the relationship used to interpret XRD data.
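Bragg's law, nλ = 2d sin θ, converts a measured diffraction angle into an interplanar spacing. A minimal numeric sketch, using the Cu Kα wavelength and a hypothetical first-order reflection angle:

```python
import math

# Bragg's law: n * lambda = 2 * d * sin(theta).
# Cu K-alpha radiation (lambda = 1.5406 Å); the reflection at
# 2-theta = 28.44 degrees is a hypothetical example, not from the text.
wavelength = 1.5406          # Å
two_theta_deg = 28.44
n = 1                        # first-order reflection

theta = math.radians(two_theta_deg / 2)          # Bragg angle in radians
d = n * wavelength / (2 * math.sin(theta))       # interplanar spacing, Å
print(f"d-spacing = {d:.3f} Å")
```

The same relation, run in reverse, predicts at which angle a known d-spacing will diffract, which is how powder patterns are indexed.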
Figure 3. Bragg’s Law
Coals containing high sulfur and chlorine contents are often blended with municipal solid wastes (MSW) in order to make combustion less environmentally damaging. Common MSW components include PVC, newspaper, and cellulose, but the addition of MSW may provide an environment that leads to alternate reaction pathways that may be equally harmful to the environment. TG/FTIR/MS can be used to accurately determine the nature of gas evolution, in order to predict what will happen with coal-MSW blends in industrial-sized combustion chambers. Using a 100 °C/min heating rate, the TGA curve of the blend seen in Figure 4 shows three apparent weight losses, which appear to be the weight losses of the individual components.
Figure 4. TGA Curve of Coal, MSWs, and Blend
The first weight loss is due to moisture, the second mostly to the decomposition of PVC, newspaper, and cellulose, and the third to the combustion of the coal and the carbon residue of PVC. The combustion of the coal can be followed by the CO2 (2230, 670 cm-1) and H2O (3851, 1652 cm-1) FTIR peaks shown in Figure 5, which correspond to the second weight loss.
Figure 5. TG-FTIR Curve for PVC
Figure 6 shows that the MS data agree with the trends developed in the analysis of the FTIR results. Sulfur dioxide appears around 280 °C and reaches a maximum at 340 °C, with another maximum at 420 °C corresponding to the third weight loss.
Figure 6. TG-MS Curve for Coal-MSW Blend
Only moisture (m/z = 18) is evolved at 100 °C. At approximately 300 °C the fuels begin to decompose, as noted by the presence of small organic and inorganic molecules: HCl, benzene, and toluene (m/z = 92) from PVC; acetic acid and/or carbonyl sulfide (m/z = 60), furan (m/z = 68), phenol (m/z = 94), and furfural (m/z = 96) from newspaper and cellulose; while carbonyl sulfide (m/z = 60) and sulfur dioxide (m/z = 64) are released from the coal. The maximum rate in this region occurs at 320 °C, and the region ends around 370 °C, agreeing with both the TGA and FTIR data for the second weight loss. Sulfur dioxide and carbon dioxide show a maximum at 420 °C, which is the third weight loss. With the evolution of gases such as HCl, chlorine, phenol, furan, and other hydrocarbons during the same temperature range, there exists a possibility that chlorinated hydrocarbons can be formed. This hypothesis is supported by the presence of chlorobenzene (m/z = 112). Polymers provide a wide range of applicability due to the large span in their physical properties. The vast extent of the properties of polymers is further enhanced by the addition of inorganic fillers. An unknown polymer sample can be analyzed with multiple methods in order to determine the amount and identity of both the polymer and any inorganic fillers present. The TGA curve shown in Figure 7 shows that the polymer has a single, broad weight loss, due to the degradation of the polymer, occurring at 470 °C.
Figure 7. TGA Curve of Unknown Polymer
The polymer’s residual amount of 37.94% after exposure to 1000 °C shows that the polymer has been filled with some inorganic filler. Using Hi-Resolution TGA, the single, broad peak of the normal TGA can be resolved into two separate weight losses, occurring at 367 °C and 391 °C, as shown in Figure 8.
Figure 8. Hi-Resolution TGA Curve of Unknown Polymer
Because Hi-Resolution TGA separates the peaks by holding the sample isothermally while a weight loss is taking place, the degradation temperatures are actually lower than would typically be expected. Figure 9 is the DSC curve, indicating that the polymer’s melt occurs at 168 °C.
Figure 9. DSC Curve of Unknown Polymer
By comparison with standard values, the polymer can be identified as most likely polypropylene. The melt temperature of the sample falls well within the typical melting range of polypropylene, and the degradation temperature of the sample is slightly lower than expected owing to the use of Hi-Resolution TGA. XRD was used to analyze the sample so that the identity of the inorganic filler could be determined. The peaks from the XRD analysis show that the inorganic filler is Mg(OH)2. The known decomposition reaction of Mg(OH)2 can be used to determine what amount of Mg(OH)2 existed in the initial sample. The largest part of the sample’s residue is MgO. From Figure 7, the residual amount of MgO, 37.94%, can be used to estimate that the initial sample was 55% Mg(OH)2. This approximation is too high
because a fraction of the residue is from the decomposed polypropylene. From the Hi-Resolution TGA, the single broad decomposition can be resolved into two peaks, the first being the decomposition of Mg(OH)2 and the second being the decomposition of the polymer. The weight loss due to the decomposition of Mg(OH)2 is approximately 15.2%. This loss, due to the evolution of water, can be used to calculate that the initial sample was approximately 49.3% Mg(OH)2. This estimate is too low, because the overlapping decomposition regions of the Mg(OH)2 and the polymer cannot be entirely resolved; thus, from the two estimates it can be concluded that the initial sample was polypropylene filled with 49-55% Mg(OH)2. Nanocomposite materials are a new class of state-of-the-art materials with numerous applications, one being use as flame retardants. Nanocomposites are materials in which a polymer chain is intertwined with an inorganic layered silicate, or clay. Three typical structures occur for a nanocomposite. The first is the phase-separated structure, in which the clay’s layered structure is intact and the polymer chain simply surrounds the clay particles. The second is the intercalated composite, where the clay’s structure is maintained, though slightly separated, and the polymer winds itself through the many layers of the clay particles. The last possible nanocomposite structure is the exfoliated one, where the clay’s structure is completely lost and the layers are completely separated and dispersed homogeneously throughout the polymer matrix. Figure 10 shows the three general structures of nanocomposites.
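The Mg(OH)2 bracketing estimate described above — an upper bound from the MgO residue and a lower bound from the resolved water loss — follows from the stoichiometry of Mg(OH)2 → MgO + H2O. A minimal sketch using the percentages quoted in the text:

```python
# Bracketing the Mg(OH)2 filler content from TGA data (values from the text;
# molar masses rounded to two decimals).
M_MgOH2 = 58.32   # g/mol, Mg(OH)2
M_MgO = 40.30     # g/mol, MgO
M_H2O = 18.02     # g/mol, H2O

residue_MgO = 0.3794   # residue fraction at 1000 °C, assumed to be all MgO
water_loss = 0.152     # Hi-Res TGA weight loss assigned to Mg(OH)2 dehydration

# Upper bound: every gram of MgO residue came from (58.32/40.30) g of Mg(OH)2.
upper = residue_MgO * M_MgOH2 / M_MgO
# Lower bound: every gram of evolved water came from (58.32/18.02) g of Mg(OH)2.
lower = water_loss * M_MgOH2 / M_H2O
print(f"Mg(OH)2 content between {lower:.1%} and {upper:.1%}")
```

The two bounds reproduce the 49-55% bracket quoted in the text; the truth lies between them because some residue is polymer char and the two decomposition steps overlap.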
Figure 10. General Structures of Nanocomposites
One of the most common clays used is montmorillonite. The crystal structure of montmorillonite consists of two-dimensional layers formed by fusing two silica tetrahedral sheets with an edge-shared octahedral sheet of either alumina or magnesia. The unique sheet geometry of the clay, shown in Figure 11, makes it an ideal choice as an inorganic filler.
Figure 11. Layered Structure of Clays
As depicted above, these layers stack, forming sheets and galleries carrying net negative charges due to isomorphous substitutions. These net negative charges are usually balanced by Na+ and Ca2+ ions that are not part of the structure; thus these ions can be exchanged for other positively charged atoms or molecules. Because the clay is hydrophilic and the polymer is hydrophobic, the surface of the clay must be modified for successful synthesis to occur. This is carried out by exchanging cationic surfactants for the free cations within the clay galleries, as seen in Figure 12.
Figure 12. Surfactant Confined by Clay Layers
The aforementioned process also serves another purpose: increasing the interlayer distance. The clay layers are typically spaced on the order of 1 nanometer apart, which is smaller than the diameter of the polymer; therefore, the layers must be separated further to enable the polymer to be inserted between them. Clays containing cationic surfactants between their layers are known as organically modified layered silicates (OLS). Regardless of the fabrication method used, elevated temperatures will be employed to form the polymer layered silicate nanocomposite (PLSN); as a result, the thermal stability of the OLS is vital. If the temperature required to fabricate the PLSN is higher than the degradation temperature of the OLS, the interface between the polymer and clay is altered, leading to unsuccessful synthesis. The TGA curve of the PLSN can be divided into four regions. Water and physisorbed CO2 and N2 are evolved below 180
°C; from 180 to 500 °C, small organic gases are emitted; region III is attributed to the dehydroxylation of the clay layers; and the loss above 700 °C results from carbonaceous residue. The four regions are depicted in the TGA curve; see Figure 13.
Figure 13. DTG Curve of Nanocomposite
Many surfactants were studied, all showing single-step degradation, although with a range of onset temperatures. The onset temperatures of the OLS most closely resemble those of the parent surfactant, although lowered by 15-25 °C. The decrease in the thermal stability of the OLS is attributed to the Lewis acid sites of the aluminosilicate layers of the clays, which catalyze the Hofmann elimination. One notable difference is that the degradation is now multi-stepped, as seen in Figure 14.
[Figure 14 plot: derivative weight (%/°C) versus temperature (°C); legend: trimethyloctadecyl quaternary ammonium chloride.]
Figure 14. DTG Curve of OLS and Parent Surfactant
The absence of CH2 and CH3 stretching in the thermal-IR shows that at temperatures above approximately 500 °C no alkyl chains are present, correlating directly with the decomposition of the surfactant. Note that the hydroxyl stretching band seen at 3675 cm-1 in Figure 15 begins to decrease at 250 °C and disappears at approximately 500 °C, associated with the dehydroxylation of the octahedral layer.
[Figure 15 plot: relative intensity versus wavenumber (cm-1), with spectra recorded at temperatures from 25 °C to 600 °C.]
Figure 15. Thermal IR of Nanocomposite
The relative change in the intensity of the IR spectrum is directly proportional to the relative concentration of the group responsible for the IR absorption. The peak area must be compared rather than the maximum absorption, because the IR peaks broaden with increasing temperature. Figure 16 shows the correlation between the time rate of change of the peak areas of the CH2 and CH3 vibrations and the DTG signal, signifying that the weight loss can definitely be attributed to the decomposition of the surfactant.
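The reason for comparing integrated areas rather than peak maxima can be shown with a synthetic example: a band that broadens with temperature (modeled here as a Gaussian of growing width) keeps its area, which tracks concentration, while its maximum falls:

```python
import numpy as np

# Synthetic IR band on an arbitrary wavenumber axis: same concentration
# (unit area), two different widths, as happens on thermal broadening.
x = np.linspace(-50, 50, 2001)
dx = x[1] - x[0]

def gaussian(x, sigma):
    # Unit-area Gaussian band of width sigma.
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

narrow = gaussian(x, 2.0)   # band at low temperature
broad = gaussian(x, 6.0)    # same band, thermally broadened

area_narrow = narrow.sum() * dx   # rectangle-rule integration
area_broad = broad.sum() * dx
print(f"areas: {area_narrow:.3f} vs {area_broad:.3f}")     # nearly equal
print(f"maxima: {narrow.max():.3f} vs {broad.max():.3f}")  # clearly different
```

The areas agree to within numerical error while the maxima differ threefold, so only the integrated area is a reliable concentration measure across a temperature ramp.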
Figure 16. DTG Curve and Time Rate of Change of the IR Peak Integrations
The XRD results as a function of temperature, shown in Figure 17, indicate that the interlayer spacing increases at a constant rate from 100 to 370 °C, which is closely related to the maximum rate of mass loss seen in the TGA and FTIR results.
Figure 17. d-Spacing as a Function of Temperature
This suggests that the initial degradation products are trapped within the layers of the OLS, causing the layers to expand, and that gases are released as the interlayer gallery begins to collapse. Isothermal Pyrolysis/GC-MS experiments were performed on the OLS and the parent surfactants at temperatures of 250, 300, 350, and 400 °C. By thoroughly studying the exact identities of the gases evolved at various temperatures, a nearly exact, step-by-step degradation pathway can be devised. The reason a multi-stepped degradation process is seen for the OLS but is absent in the parent surfactant is the confinement of the surfactant within the layers of the clay. This unique geometry retards gas evolution and keeps the products bound very tightly together, making further reactions much more likely, whereas with the parent surfactant the degradation gases are immediately released. Whether confined or free, the initial step of the degradation is a Hofmann elimination at the α-carbon with respect to the surfactant head group. The four regions of the thermal degradation of OLS can be summarized as follows. Region I extends from room temperature to approximately 70 °C, where the surfactants are paraffinic in nature. Region II extends to about 200 °C, where the surfactants melt, resulting in a much more mobile, liquid-like interlayer. Region III extends to approximately 350 °C, where initial surfactant degradation occurs via a Hofmann elimination reaction; as a result, the interlayer expands with gases as well as other long-chain molecules not associated with the aluminosilicate layer. Region IV occurs above approximately 380 °C and shows the first signs of the interlayer gallery collapsing and the presence of carbonaceous char. Micro-Thermal Analysis (Micro-TA) has a major advantage over standard thermal analytical techniques in providing localized thermal imaging.
Typical techniques such as DSC and DMA provide a bulk signal, whereas Micro-TA can provide thermal information specific to small regions within a single sample. The first mode of analysis provided by Micro-TA is conductivity imaging, in which the thermal conductivities of regions between 100 and 10,000 square microns can be differentiated. However, the results from this mode of analysis can become immensely difficult to interpret when the topography of the sample contains many valleys and peaks. The conductivity signal is proportional to the amount of mass in close proximity to the probe; therefore, the peaks and valleys can produce large anomalies in the thermal
conductivity data. Another mode of analysis is making localized MDSC and TMA measurements. This allows the thermal properties of various positions on the sample’s surface to be compared. A polypropylene (PP) resin was filled with glass beads of different sizes and in different relative amounts. Micro-TA was used to characterize the samples. Thermal conductivity imaging was not an appropriate mode of analysis because of the topography of the polymer matrix; instead, the Micro-TMA curves shown in Figure 18 were compared.
Figure 18. Micro-TMA Curves of Filled Polypropylene Resin
The general shape of the majority of the curves shows a transition around 130 °C. Curves 1, 3, and 9 show a more gradual transition. This is a result of the probe being in close proximity to a bead. The glass bead acts as a thermal sink, requiring more energy to effectively raise the temperature of the surrounding polymer. Consequently, the sample’s temperature lags behind the temperature of the probe. Curve 4 is an example of the probe resting directly on a glass bead, noted by the gradual expansion at high temperatures. The sharp penetration at the end of this curve occurs because the glass bead and surrounding polymer have acquired enough energy for the bead to sink into the polymer. Curve 5 penetrates the polymer sharply, then shows a gradual expansion followed by a second penetration. This signal occurs because the probe initially has no contact with any glass bead, but upon penetration comes into contact with one. The second penetration is the result of either the bead sinking into the polymer or the probe slipping off the bead. Curves 6, 7, and 8 show no evidence of any contact with the glass beads, while Curve 2 has an entirely different appearance and no evident explanation for its behavior. A hydrogenated nitrile rubber (HNBR) from a refrigerator compressor’s lipseal was analyzed to determine whether or not the seal was degraded. DSC was chosen as the analysis technique, but the signal from the small portion suspected of degradation (the outer portion of the seal) is lost in the predominant signal of the undegraded material. Thermal imaging was unsuccessful owing to the extreme topography of the seal; therefore, Micro-TMA, as seen in Figure 19, was used for positional sampling of the undegraded and degraded regions as identified by the naked eye.
Figure 19. Micro-TMA Curves of Compressor Lipseal
Five positions in each of the undegraded and degraded regions were analyzed and the results compared. The TMA curves from the undegraded region expanded uniformly, but the TMA curves from the degraded region varied from slight expansion to contraction. This figure shows that the two regions are distinct, but it does not show that the sample contains degraded rubber. To verify that the proposed “undegraded” region was actually completely undegraded, the seal was compared with control samples of unused rubber. Assuming that oxidation lowers the coefficient of thermal expansion (CTE), the polymer’s different morphologies can be explained. The unused rubber has the highest CTE, while the undegraded region has a slightly lower CTE, but the general shapes of the TMA curves are very similar. The TMA curves of the degraded region show that the CTE is significantly lowered and that the material in most cases contracts rather than expands upon heating. A corroded stainless steel sample was analyzed using the thermal conductivity mode. Because the surface of the stainless steel sample was machined to eliminate excessive topography, the sample is suitable for the thermal conductivities to be differentiated. The results from SEM-EDX shown in Figure 20 look very much like the Micro-TA results in Figure 21.
Figure 20. SEM-EDX Surface for Corroded Stainless Steel Sample
Figure 21. Micro-TA Surface for Corroded Stainless Steel Sample
The Micro-TA results actually show a clearer definition of the boundaries. Without the EDX function, SEM is incapable of determining whether the layers have a different identity or simply a higher concentration of material. The major advantage that Micro-TA possesses over SEM-EDX is the amount of sample preparation required. For Micro-TA the stainless steel samples can be analyzed essentially as is, while with SEM the sample preparation is tedious and the working conditions are very critical. The previous case studies show significant evidence for the importance of using multiple analytical techniques to study the same systems. Using a single technique limits the ability, and increases the difficulty, of successfully interpreting the data with respect to explaining the nature of the sample. Using two or more techniques ensures that the assumptions made are valid and resolves uncertainties arising from a single set of data. Taking advantage of multiple modes of analysis allows the systems being analyzed to be understood with minimal uncertainty.
References
1. W. Xie, W.-P. Pan, “Thermal Characterization of Materials using Evolved Gas Analysis,” J. Therm. Anal. Calorim., 2001, 65, 669-685.
2. W. Xie, R. Xie, W.-P. Pan, D. Hunter, B. Koene, L. Tan, R. Vaia, “Thermal Stability of Quaternary Phosphonium Modified Montmorillonites,” Chem. Mater., 2002, 14(11), 4837-4845.
3. W. Xie, Z. Gao, W.-P. Pan, D. Hunter, A. Singh, R. Vaia, “Thermal Degradation Chemistry of Alkyl Quaternary Ammonium Montmorillonite,” Chem. Mater., 2001, 13, 2979-2990.
4. W. Xie, B. Sisk, W.-P. Pan, “Micro-Thermal Analysis and Its Applications in Material Science,” NATAS Notes, 2000, 32(3), 9-15.
Energy Evaluation of Materials by Bomb Calorimetry José A. Rodríguez-Añón and Jorge Proupín-Castiñeiras Research Group TERBIPROMAT. Departamento de Física Aplicada. Facultade de Física. Universidade de Santiago. Av. J.M. Suárez Núñez, s/n. 15895 Santiago. Spain
[email protected]
1. Introduction
The aim of this chapter is to introduce the reader to the field of combustion bomb calorimetry. This technique is one of the oldest and most precisely studied in modern physical chemistry, and it complements other calorimetric techniques.
2. Historical background
Early studies related to combustion calorimetry date from the latter part of the eighteenth century. Lavoisier and Laplace described, in 1784, an ice calorimeter with which heats of combustion could be determined. They studied the heat released by animal respiration.
Figure 1. Lavoisier and Laplace ice calorimeter
In 1788 Crawford used a similar experimental procedure, from which he concluded that oxygen plays a key role during the combustion of organic matter in animals, and that part of the energy generated during this process is used at different levels of the animal’s metabolism. It was Thompson who, years later, transformed calorimetric research by beginning to study the heats of formation of materials in daily use, such as wood, oils and spirits. This moment should be considered the beginning of applied calorimetry.
Thermochemistry then suffered a break of nearly 50 years as a consequence of the theories presented by Lavoisier, before scientists came to realize that heat is a form of energy. However, this concept was not firmly established until 1840, by G. H. Hess. Probably the first calorimetric bomb was used in 1848 by Andrews to determine the heats of combustion of a variety of solid, liquid and gaseous substances. Gases were burned in a 580 cm3 thin-walled copper cylinder, while solid and liquid samples were placed in a platinum crucible introduced into a 4 dm3 copper cylinder, which was then filled with oxygen. Ignition of the sample was achieved by passing through a platinum wire the electric current generated by a battery.
Figure 2. First calorimetric bomb designed by Andrews
Favre and Silbermann introduced the term “calorie” as the unit of heat in thermochemical publications. They studied a variety of organic compounds and published their results in 1852. They constructed their own thermometers, which detected temperature differences of 0.001 °C to 0.002 °C. Thomsen conducted thermochemical research at Copenhagen in the period from 1851 to 1885. His studies were collected in a work entitled “Thermochemische Untersuchungen,” published from 1882 to 1888. M. P. E. Berthelot began his studies in thermochemistry in Paris in 1864. His studies were published in 1878 in a series of papers in which a glass flame calorimeter was described. Together with Ogier, he reported a technique for burning liquids in glass ampoules. In 1885, Berthelot and Vieille developed a combustion bomb technique, which was very useful for solid and low-volatility liquid samples. This new method used 25 atm of oxygen. Owing to the technological progress experienced at the beginning of the twentieth century, it became necessary to introduce some innovations in the design and development of calorimeters, in order to optimize their performance in specific studies. Kroeker, Parr and other investigators were able to make changes such as the use of a nickel-chromium alloy with suitable acid-resistant properties, the substitution of a rubber gasket for one of lead, etc. In 1915, Dickinson published an article in which procedures, apparatus and calculations in combustion calorimetry, together with values for the heats of combustion of some substances, were reported. He also stressed the advantage of using a platinum resistance thermometer instead of a mercury-in-glass thermometer. W. A. Roth introduced some improvements in the calibration of his isothermal-jacket bomb calorimeter. The heats of combustion of benzoic acid, naphthalene and
ENERGY EVALUATION OF MATERIALS BY BOMB CALORIMETRY
sucrose, as well as electrical measurements, were used to obtain the water equivalent. Temperature changes were observed with a Beckmann thermometer which could be read to 0.0005 ºC. He also introduced corrections for the formation of aqueous nitric acid and carbon residues. At the Third Meeting of the I.U.P.A.C. in 1922 at Lyon (France), benzoic acid was selected as the chemical standard. The value adopted was 6324 cal (15 ºC) g-1 (air). E. W. Washburn recommended that, in conjunction with certifying a value for standard benzoic acid to be used to calibrate bomb calorimeters, a series of corrections known as Washburn corrections should be applied. In the 1950s, the development of electronics allowed the design of new calorimeters with technological improvements, mainly in the field of temperature detection systems. Since then, the calorimetric technique has improved greatly and enlarged its application to different scientific fields such as the search for alternative energy sources, evaluation of the potential activity of soils, design of pharmaceuticals, environmental studies, design of new industrial materials, etc.

3. Calorimetry

Calorimetry is a technique by which heat exchange is measured either directly or indirectly. Combustion calorimetry refers to the measurement of the heat (at constant pressure) or energy (at constant volume) of a reaction in which the carbon skeleton of a compound is totally broken when the compound is burnt under a gaseous oxygen atmosphere. The term reaction calorimetry applies to the measurement of the energy or heat of any reaction other than combustion. Every calorimetric experiment consists of three well defined stages:
• The calorimetric part, which concerns the accurate determination of the energy generated in the reaction.
• The chemical part, which concerns the characterization of the initial and final states.
• The transformation of the results obtained in the calorimetric experiment to a standard-state combustion energy at 298.15 K, from which a standard enthalpy of formation can be calculated.
4. Calorimeters

All calorimeters are based on variations of the same basic principle: the process to be studied takes place inside the boundaries of a more or less closed area, known as the calorimeter proper, in controlled thermal contact with its surroundings, the jacket. The calorimeter and jacket are supplied with different auxiliary devices such as:
• A thermometer that controls and measures the temperature changes resulting from an experiment.
• Stirring devices that homogenize the temperature at any place and time in the calorimeter.
• An ignition system, to start the reaction.
• A temperature controller to conduct the heat exchange between the calorimeter proper and its jacket.
JOSÉ A. RODRÍGUEZ AND JORGE PROUPÍN
Figure 3. Basic calorimeter sketch

Calorimeters can be classified following different criteria. The arrangement of calorimeters in a classification system should be based on a simple structure suitable for practical application. There are different classification criteria, but one should not attempt to classify every calorimeter in every detail, because this leads to a lack of clarity and to practical insignificance. Moreover, many calorimeters can be operated in diverse modes, so that a calorimeter can be classified in different ways depending on the mode of operation. For this reason, different primary and secondary criteria are adopted. According to the mode of operation (temperature), calorimeters can be classified as:
• Adiabatic calorimeters, in which there is no heat exchange between the calorimeter proper and the jacket because their temperatures are kept identical during the experiment.
• Isoperibol calorimeters (heat-exchanging calorimeters), in which the jacket temperature remains practically constant during the experiment.
• Isothermal calorimeters, in which the temperature of the calorimeter proper remains constant.
• Bomb calorimeters. In this type of apparatus the reaction chamber consists of a hermetically closed heavy-walled container. The reaction is carried out under an oxygen or fluorine atmosphere, and the reaction vessel can be either static or moving, thus differentiating between static and rotating bomb calorimeters.
• Flame calorimeters, in which the reaction between the sample and an oxidizing agent is started by a flame.
Figure 4. Calorimetric bomb
Figure 5. Bomb calorimeter assembly used by the Research Group TERBIPROMAT
5. Theory of an isothermal experiment

In a calorimetric experiment, the difference in energy between two well defined states, the initial and final states of a combustion reaction, is determined. The process is assumed to take place inside an enclosure that is totally isolated from the surroundings, thus adiabatically (ΔQ = 0), and at constant volume (ΔV = 0). According to the first law of thermodynamics, ΔU = 0, which means that U(state1, T1) = U(state2, T2) (see Fig. 6).
Figure 6. Theory of a calorimetric experiment I (energy diagram: the overall change U1(T1) → U2(T2) with ΔU = 0 is decomposed into an isothermal reaction step ΔUa at constant T1, giving U2(T1), followed by a heating step ΔUb)

This process can be modified so that the system is heated from T1 to T2 and the reaction then takes place isothermally at temperature T2 (see Fig. 7).

Figure 7. Theory of a calorimetric experiment II
6. Temperature vs. time plot

Throughout a combustion calorimetric experiment, measurements of temperature are made at fixed time intervals, say 15 seconds. The calorimetric experiment is ordinarily divided into three periods, as shown in Figure 8:
• An initial period, in which the temperature change of the calorimeter is due entirely to heat exchange between the calorimeter and its surroundings (thermal leakage) and to the heat of stirring.
• A main period, in which most of the temperature rise takes place as a consequence of the combustion occurring in the bomb.
• A final period, in which the temperature change of the calorimeter is again due entirely to thermal leakage and the heat of stirring.
(Axes: temperature T (ºC) vs. time t (s); points A, B, C, D and times ti, tB, tc, tf delimit the three periods, and ΔTcorr is the corrected temperature rise.)
Figure 8. T-t plot corresponding to a calorimetric bomb experiment
7. Calorific value

The calorific value of a fuel sample is the heat generated by complete combustion of one mass unit of sample in an oxygen atmosphere. According to the standards of the International Organization for Standardization (ISO), two calorific values must be considered, depending on the conditions in which the combustion takes place:
• The higher heating value (HHV), at constant volume, is defined as the quantity of heat generated by complete combustion of a mass unit of sample, in an oxygen atmosphere, under standard conditions. The final products after combustion are oxygen, carbon dioxide, sulphur dioxide and nitrogen, all of them in the gas phase, together with water in the liquid phase in equilibrium with its vapour and saturated with carbon dioxide, and a solid phase formed by the ashes.
• The lower heating value (LHV), at constant volume, is the heat generated by complete combustion of one mass unit of sample, assuming that the water in the final products remains in the form of vapour.

8. Use of bomb combustion calorimetry as a complement to thermal analysis in the field of R+D

Bomb calorimetry is used in the field of R+D to determine calorific values of different materials. A brief summary of the research developed by our research group TERBIPROMAT at the University of Santiago de Compostela is given:
1. Search for alternative energy sources from residue materials, mainly Municipal Solid Waste (MSW) and forest waste originated from different forestry tasks carried out in Galicia (N.W. Spain). Table 1 shows data for the different zones of MSW originated in Vigo (Galicia). These studies were complemented by thermogravimetric analysis of the degradation of these materials, seeking possible control of their polluting elements.
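The difference between the two heating values is essentially the latent heat of the water contained in the combustion products. The following is a minimal numerical sketch, assuming the simplified correction LHV = HHV − h_vap·m_water; the exact correction terms and constants depend on the standard followed (e.g. ISO 1928) and on whether the determination is at constant volume or constant pressure:

```python
# Hedged sketch, not a standard formula: LHV estimated from HHV by
# subtracting the vaporization enthalpy of the water in the products
# (sample moisture plus roughly 9 kg of water per kg of hydrogen burned).
# The exact correction and constants depend on the standard followed.

H_VAP = 2442.0  # kJ/kg, enthalpy of vaporization of water near 25 ºC

def lower_heating_value(hhv, moisture_pct, hydrogen_pct=0.0):
    """Estimate LHV (kJ/kg) from HHV (kJ/kg), moisture (%) and hydrogen (%)."""
    water_per_kg = moisture_pct / 100.0 + 9.0 * hydrogen_pct / 100.0
    return hhv - H_VAP * water_per_kg

# Residential-commercial MSW row of Table 1 (HHV = 20400 kJ/kg,
# moisture = 30 %); the 5 % hydrogen content is a made-up illustration.
print(lower_heating_value(20400, 30, hydrogen_pct=5.0))
```

With moisture alone (30 %) the sketch gives 20400 − 2442 × 0.30 ≈ 19667 kJ/kg; the much lower tabulated LHV of 12060 kJ/kg shows that the full corrections applied in practice go well beyond this minimal moisture-only term.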
Table 1

Zone characteristic        HHV (kJ kg-1)   LHV (kJ kg-1)   Moisture (%)   Ash (%)
RESIDENTIAL-COMMERCIAL         20400           12060            30           19
COUNTRY AREA                   13600            7300            27           28
COUNTRY-INDUSTRIAL             20700           11500            31           21
RESIDENTIAL-INDUSTRIAL         27000           16100            30           19
RESIDENTIAL                    20600           12500            29           19
R.D.F.                         19700           10500            30           22
(TG curve of the refuse-derived fuel (CDR): regions corresponding to water loss, fast-burning fuels, slow-burning fuels and ashes.)
Figure 9. Degradation of a Residue Derived Fuel (RDF)
Figure 10. Use of FT-IR to detect pollutants

2. Design of risk indices to prevent and fight forest fires. In this research line, combined analyses of different physical, chemical, biological and environmental parameters were carried out with the aim of obtaining a numerical value capable of helping in the prevention of and fight against forest fires. These numerical values are represented in the form of risk index maps.
(Legend: risk classes 0 to 5 and non-forest areas.)
Figure 11. Risk index maps of the conifer species existing in Galicia

3. Study of the behaviour of different materials, with the aim of considering their possible recycling. The example refers to an epoxy resin.
Figure 12. Thermogravimetric study of an epoxy resin
Figure 13. FT-IR spectrum
Table 2

Material           HHV (kJ kg-1)   LHV (kJ kg-1)   Moisture (%)   Ash (%)
POLYMER                32100           20200            28            7
PAPER-CARDBOARD        16000           10800            21           12
TEXTILE                23200           11800            42            9
ORGANIC MATTER         13800            6100            43           39
Introduction to the Viscoelastic Response in Polymers María L. Cerrada Instituto de Ciencia y Tecnología de Polímeros. Juan de la Cierva, 3. 28006 Madrid. Spain
[email protected] A polymer is made up of many (“poly” = many) repeated units (= “mer”) of a monomer (mono = one). Consequently, polymer molecules are composed of a large number of identical building blocks, labeled as macromolecules. The length of the polymeric chain is specified by the number of repeat units in the chain. This is called the 'degree of polymerization' (DP). Therefore, the molecular weight (MW) of a polymer is defined by: MW = Degree of polymerization x molecular weight of repeat unit All processes used in polymer production lead to chains of varying lengths and hence with different molecular weights. Accordingly, there is not a single molecular weight in a given polymer but a distribution of molecular weights that is more or less broad depending upon uniformity in length of chains that composed its macromolecules. Consequently, molecular weights of polymers are represented as average molecular weights. Optimum molecular weight for a polymer depends on the structure and its end use. It has to be said that polymers with very high molecular weights are difficult to process. Typically, molecular weights in industrial polymers are in the interval of 50.000-300.000. On the other hand, flexible macromolecules may adopt a large number of conformations that are determined by the position taken in the space by their atoms since this location changes by simple rotation about single bonds, as represented in Figure 1 for three carbon atoms. However, a limited number of conformations is accessible in polymers with rigid chains. Additionally, flexible polymer in the crystalline state adopt fixed conformations whereas in solution or in the molten state they exhibit a wide range of conformations.
Figure 1. Flexible macromolecules

The different macromolecules composed of a large number of identical building blocks interact with one another. These inter- and intramolecular interactions provide a strength that will be smaller or larger depending on the sort of interaction. There are four different types in polymeric materials, as shown in Figure 2.
(Interaction types shown: Van der Waals, dipolar, hydrogen bond, ionic.)
Figure 2. Inter- and intramolecular interactions

The three characteristics just mentioned (a large and heterogeneous molecular weight, the possibility of adopting a great number of conformations, and the existence of intra- and intermolecular interactions) are the variables in polymers that promote their peculiar and versatile mechanical behavior, which is very important from a practical point of view. Classical theories of elasticity and viscosity of a body assume steady-state stress, strain and strain rate. Therefore, the time it takes to reach equilibrium conditions is not considered. Observers over one hundred years ago found that most materials failed to reach equilibrium in an observable time period. Moreover, the supposedly constant coefficients of viscosity and elasticity were dependent on pre-treatments and loading history. Such effects, which were not accounted for by the classical theories, were called "elastic aftereffects". These were more pronounced in some materials such as food dough, wet clays, pitch and unvulcanized rubbers. With the development of synthetic polymers throughout the XX century, these effects became even more prevalent and dominated equilibrium properties. Another observation made long ago was that the properties of such materials depend upon the rate of loading or deformation rate. This feature is analogous to that of a fluid, where the stress is proportional to the strain rate. In particular, slow deformation rates make a material act like a liquid and high deformation rates make the same material act like an elastic solid. Eventually, researchers accepted the idea that all materials behave in a viscous or an elastic manner depending on how quickly or slowly they are deformed. This behavior led to a new term: viscoelasticity.
In other words, the mechanical properties of materials, primarily polymers, are time-dependent, and perfectly elastic deformation and perfectly viscous flow are idealizations that are only approximately reached under some limited conditions. Viscoelasticity is, then, the study of the response of polymers (or other materials) which exhibit some of the features of both elastic and viscous behavior. Elastic materials deform instantaneously when a load is applied and "remember" their original configuration, returning to it instantaneously when the load is removed. In solids, the relaxation of the structure at the molecular level is extremely slow and, therefore, their response is essentially elastic. On the other hand, viscous materials do not show such characteristics, but instead exhibit time-dependent behavior. While under a constant stress, a viscous body strains at a constant rate, and when this load is removed the material has "forgotten" its original configuration, remaining in the
deformed state. In ordinary liquids, molecular reorganization occurs very rapidly and structural memory at the molecular level is very short. The response is essentially viscous unless the frequency of the testing experiment is very high. Viscoelastic materials exhibit certain characteristics of these two behaviors, as manifested in time-dependent behavior, a "fading memory", partial recovery, energy dissipation, etc. Such behavior may be linear (stress and strain are proportional) or nonlinear. Polymers are the most important viscoelastic systems. Above the glass transition temperature, the response of these materials to a mechanical perturbation field involves several types of molecular motions. For instance, the rearrangement of flexible chains may be very fast on the length scale of a repeat unit. These movements imply some type of cooperativity in the conformational transitions that produce them. Cooperativity occurs even as the relaxation propagates along the chains, involving a growing number of segments of the backbone as time passes. At very long times, disentanglement of the chains takes place, and the longest relaxation time associated with this process shows a strong dependence on both the molecular weight and the molecular architecture of the system. The disentanglement process governs the flow of the system. As a consequence of the complexity of the molecular responses, polymer chains exhibit a wide distribution of relaxation times that extends over several decades in the time or frequency domain. At short times or high frequencies the response is mainly elastic, whereas at long times or low frequencies it is mainly viscous, as depicted in Figure 3. Obviously, the elastic component of the deformation is recoverable, but the viscous component is not. The elastic component of the deformation is of an entropic nature and, consequently, is temperature-dependent, as will be discussed below.
Figure 3. (Modulus vs. frequency and modulus vs. temperature: glassy state, transition region, rubbery plateau and terminal region, spanning the elastic, viscoelastic and viscous responses.)

1. Superposition principles
There are two superposition principles that are important in the theory of viscoelasticity. The first of these is the Boltzmann superposition principle, which describes the response of a material to different loading histories. This principle states that small changes in stress equal small changes in modulus multiplied by the strain, which means that the modulus is independent of the amount of strain. Therefore, each loading step makes an independent contribution to the total loading history, and the total final deformation is the sum of each contribution. Another
consequence of this principle is that the deformation of a specimen is directly proportional to the applied stress when all deformations are compared at equivalent times. Embodied in this principle is the linearity of viscoelasticity.
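The step-wise summation described by the Boltzmann principle can be sketched numerically; the compliance function J(t) and the loading history below are illustrative assumptions, not data from the text:

```python
# Hedged sketch of the Boltzmann superposition principle: the strain
# under a multi-step loading history is the sum of the independent
# responses to each stress increment. The exponential compliance used
# here is purely illustrative.
import math

def creep_compliance(t, j0=1e-9, j_inf=5e-9, tau=100.0):
    """Illustrative creep compliance (1/Pa): instantaneous + delayed part."""
    return j0 + (j_inf - j0) * (1.0 - math.exp(-t / tau))

def strain(t, load_steps):
    """load_steps: list of (t_i, delta_sigma_i) stress increments (Pa)."""
    return sum(d_sigma * creep_compliance(t - t_i)
               for t_i, d_sigma in load_steps if t >= t_i)

# Two loading steps: 1 MPa at t = 0 s and a further 1 MPa at t = 50 s.
history = [(0.0, 1e6), (50.0, 1e6)]
print(strain(200.0, history))
```

The key linearity property is visible in the code: the strain produced by the two-step history equals the sum of the strains each step would produce on its own.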
Figure 4.

The second principle is the time-temperature superposition principle, which establishes the correspondence between time and temperature; it has been used for a long time. Viscoelastic curves measured at different temperatures are found to be superposable by horizontal shifts along a logarithmic time scale to give a unique curve covering a very large range of times or frequencies. Such curves made by superposition, using some temperature as a reference temperature, cover times outside the range easily accessible in practical experiments. Consequently, if the principle holds, it is feasible to predict the long-term behavior of a polymeric material from short-term experiments. The curve made by superposition is called the master curve (see Figure 5). The accomplishment of this time-temperature correspondence requires that there be no change in the relaxation/retardation mechanism with temperature and that the relaxation time, τ, values for all the mechanisms change identically with temperature. A term that moves the time scale of the response function, labeled the horizontal shift factor (aT), is defined as the ratio of any relaxation time τi at some temperature T to that at the reference temperature T0:

aT = τi / τi0
Figure 5. Master curve (E (GPa) vs. t (s): curves measured at 23 ºC, 50 ºC, 80 ºC and 100 ºC superpose onto a single master curve spanning roughly 10^-5 to 10^5 s.)

The method of relating the horizontal shifts along the log time scale to temperature changes, developed by Williams, Landel and Ferry, is known as the WLF method. The amount of horizontal shift of the log time scale is given by log aT. If the glass transition temperature is chosen as the reference temperature, the temperature dependence of the shift factor for most amorphous polymers is:

log aT = −C1g (T − Tg) / (C2g + (T − Tg))
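As a hedged numerical sketch, the WLF equation can be evaluated with the commonly quoted "universal" constants C1g = 17.44 and C2g = 51.6 K; real polymers generally require constants fitted to experimental data:

```python
# Hedged sketch of the WLF shift factor with the often-quoted
# "universal" constants for Tg as reference temperature; these values
# are only approximate and material-specific fits are normally needed.

C1G = 17.44  # dimensionless
C2G = 51.6   # K

def log_shift_factor(t, tg):
    """log10(aT) at temperature t relative to the glass transition tg (same units)."""
    dt = t - tg
    return -C1G * dt / (C2G + dt)

# Ten kelvin above Tg the shift is about -2.8 decades: the time scale
# of the response is shortened by almost three orders of magnitude.
print(log_shift_factor(110.0, 100.0))
```

This is why master curves can span time ranges far beyond what any single isothermal experiment covers: modest temperature changes translate into decade-scale shifts along the log time axis.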
2. Non-linear viscoelastic response

If the Boltzmann superposition principle holds, the strain is directly proportional to the stress at any given time and, similarly, the stress at any particular time is directly proportional to the strain, depending upon which of these two magnitudes is the stimulus and which the corresponding response. These assumptions are generally true for small stresses or strains, but the principle is not exact. If large loads are applied in creep measurements, or large strains in stress relaxation ones, as can occur in practical structural applications, non-linear effects come into play. One result is that the response, ε(t) or σ(t) respectively, is no longer directly proportional to the excitation (σ or ε). The distribution of retardation or relaxation times can also change, and so can aT. The problem is very complex even in cases where complications such as microcracking or phase changes (e.g. the appearance of crystallinity, as in the stretching of natural rubber, or variation in the percentage of crystallinity in polyethylene) are absent. It involves unsolved problems in non-equilibrium thermodynamics, mathematical approximation and the physics of the underlying processes. As a result, there is no general solution, and each specific case has to be treated in a particular manner.

3. Structural parameters affecting the viscoelastic response

There are several accountable inherent parameters that condition the viscoelastic behavior of polymeric materials. The primary ones are the following:
• chemical structure
172
MARÍA L. CERRADA
• molecular architecture
• molecular weight and crosslinking
• copolymers and blends
• effect of plasticizers
• molecular orientation
• fillers and fibers
4. Types of experiments

The analysis of the viscoelastic response is basically carried out by four different types of tests:
• creep
• stress relaxation
• stress-strain experiments
• dynamic mechanical measurements

5. Creep

Creep and stress relaxation tests measure the dimensional stability of a polymer and, because the tests can be of long duration, they are of great practical importance. Creep measurements, especially, are of interest to engineers in any application where the polymer must sustain loads for long periods. Creep and stress relaxation measurements are also of major importance to anyone interested in the theory or molecular origins of viscoelasticity. Creep measurements consist of loading the sample with a constant stress and analyzing the increase of strain with time. For an ideal elastic solid (a Hookean solid, since its behavior is described by Hooke's law),

E = σ / ε

being E the elasticity modulus, σ the stress and ε the strain, the amount it deforms is controlled completely by its modulus. The time it takes for the process to occur is instantaneous, so time is not a variable. This strain-time dependence is shown in Figure 6. However, for an ideal liquid (a Newtonian liquid, since its response is described by Newton's law),

σ = η · (dε/dt)

being σ the stress, η the viscosity and dε/dt the strain rate, when a constant stress is applied it will flow for as long as the stress is maintained. Therefore, the strain in the liquid is a function of two variables, stress and time, as seen in Figure 6.
Figure 6. (Strain vs. time: deformation of a Hookean solid under a constant stress, and deformation of a Newtonian liquid under two stresses, σ1 > σ2.)
For a viscoelastic polymeric material subjected to a constant stress, the observed strain is made up of different components: initially there is an instantaneous elastic deformation (ε1 in Figure 7), as in a Hookean solid, followed by a time-dependent strain (ε2 in Figure 7) that is curved at the beginning and eventually reaches a limiting slope. After removing the load, a recovery process is observed. This mechanism also consists of various contributions: an immediate recovery of the elastic deformation (ε1), another time-dependent one (ε2), and a permanent deformation (ε3) due to the viscous character.
Figure 7. (Strain measured vs. time for a load applied between t1 and t2: ε1 = immediate elastic deformation, ε2 = delayed elastic deformation, ε3 = permanent viscous deformation.)

An equation that describes the viscoelastic behavior from t1 to t2 is

ε(t) = ε∞ [1 − e^(−t/τ)]
174
MARÍA L. CERRADA
where τ is the retardation time and ε∞ is the limiting strain as t → ∞. Conversely, when the stress is removed, the strain that is recovered follows this other equation:
ε(t) = ε∞ · e^(−t/τ)

The viscoelastic function evaluated from creep experiments is the compliance, J(t), which is defined as the ratio of the time-dependent strain, ε(t), to the constant applied stress, σ:

J(t) = ε(t) / σ
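The creep and recovery expressions above can be sketched as follows; the numerical values of τ, ε∞ and σ are made-up illustrations, and a single retardation time is assumed (real polymers show a distribution):

```python
# Hedged sketch of the creep equations: strain growth under a constant
# stress and recovery after unloading, for a single retardation time
# tau (a Kelvin-Voigt-like element; illustrative only).
import math

def creep_strain(t, eps_inf, tau):
    """Strain while loaded: eps(t) = eps_inf * (1 - e^(-t/tau))."""
    return eps_inf * (1.0 - math.exp(-t / tau))

def recovery_strain(t_after, eps_inf, tau):
    """Recoverable strain a time t_after after unloading: eps_inf * e^(-t/tau)."""
    return eps_inf * math.exp(-t_after / tau)

def compliance(t, eps_inf, tau, sigma):
    """Creep compliance J(t) = eps(t) / sigma for a constant stress sigma."""
    return creep_strain(t, eps_inf, tau) / sigma

# Illustration: tau = 10 s, limiting strain 2 %, constant stress 1 MPa.
print(creep_strain(10.0, 0.02, 10.0))    # ~63 % of eps_inf after one tau
print(compliance(10.0, 0.02, 10.0, 1e6))
```

After one retardation time the delayed strain has reached 1 − 1/e (about 63 %) of its limiting value, which is the usual operational meaning of τ.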
6. Stress relaxation

In stress relaxation tests, the specimen is quickly deformed a given amount and the stress required to hold the deformation constant is measured as a function of time. Stress relaxation experiments are very important for a theoretical understanding of viscoelastic materials. However, such tests have not been as popular with experimentalists as creep measurements. There are probably at least two reasons for this:
• stress relaxation experiments, especially on rigid materials, are more difficult to perform than creep tests;
• creep tests are generally more useful to engineers and designers.
For an ideally elastic material, the stress necessary to keep ε constant will remain constant for t > 0. However, for a model viscous liquid, the stress will be instantaneously infinite at t = 0 and then zero for t > 0, as represented in Figure 8.
Figure 8. (Stress vs. time at constant strain: responses of a Hookean solid and of a Newtonian liquid in stress relaxation experiments.)

However, for a viscoelastic material the stress will decrease slowly with time. A typical stress relaxation curve is shown in Figure 9. The mathematical representation of the stress relaxation curve is
σ(t) = σ0 · e^(−t/τ)
where σ0 is the initial stress. The viscoelastic function obtained is the stress relaxation modulus, E(t), which is defined as

E(t) = σ(t) / ε = E0 · e^(−t/τ)

In the linear viscoelastic regime, the creep compliance function is a time-dependent reciprocal modulus.
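A minimal sketch of this single-exponential relaxation model (the parameter values are illustrative only, and real materials require a spectrum of relaxation times):

```python
# Hedged sketch of single-exponential stress relaxation after a step
# strain applied at t = 0; parameter values are made up.
import math

def relaxation_stress(t, sigma0, tau):
    """sigma(t) = sigma0 * e^(-t/tau)."""
    return sigma0 * math.exp(-t / tau)

def relaxation_modulus(t, e0, tau):
    """E(t) = sigma(t)/eps = E0 * e^(-t/tau)."""
    return e0 * math.exp(-t / tau)

# After one relaxation time the stress has decayed to 1/e (about 37 %)
# of its initial value.
print(relaxation_stress(5.0, 10e6, 5.0) / 10e6)
```

Note the symmetry with the creep sketch: E(t) decays with the relaxation time τ just as J(t) grows with the retardation time, the two functions being reciprocal in the linear regime.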
Figure 9. (A constant strain ε0 is applied and the measured stress, σ(t), decays with time.)
7. Stress-strain experiments

This type of test measures the response (strain) of a sample subjected to a force that varies with time at a constant rate. It is widely used and is a very important practical measurement. However, the relationship of this test to end-use applications is not as clear as is generally assumed. Because of the viscoelastic nature of polymers, with their sensitivity to many factors, the stress-strain test is, at best, only a rough guide to how a polymer will behave in a finished object. To give the engineer or designer complete and realistic information, tests at many temperatures, rates of testing and other conditions are required; consequently, much time and material are needed. The shape of stress-strain curves is used to define brittle and ductile behavior. Since the mechanical properties of polymers depend on both temperature and observation time, the shape of the stress-strain curves changes deeply with strain rate and temperature for a given polymer, as seen in Figure 10. The curves represented in Figure 10a for hard and brittle polymers show that the stress increases more or less linearly with the strain. This behavior is characteristic of amorphous and semicrystalline polymers well below the glass transition temperature, Tg. These materials fail at low strains (usually lower than 10 %), leading to a brittle fracture. The curve in Figure 10b describes polymers showing a ductile behavior that yield before failure. The most ductile polymers undergo necking and cold drawing, failing at higher strains (around 250-400 %). Semicrystalline polymers are typical examples that display this behavior at temperatures intermediate between the glass transition and melting. The curves depicted in Figure 10c are characteristic of elastomers or amorphous polymers above their Tg, showing an elastomeric behavior that reaches very high deformations before breaking. The deformation takes place homogeneously in these materials or at these temperatures.
Figure 10. (Stress-strain curves, σ (MPa) vs. ε (%): (a) brittle behavior, failing near 10 %; (b) ductile behavior, failing near 250 %; (c) elastomeric behavior, reaching strains of around 1000 %.)
Figure 11. (Nominal stress, σN (MPa), vs. strain, ε (%): points A, A', B, C, D and X, with the yield point at coordinates (εY, σY).)
Figure 11 shows a stress-strain curve corresponding to a tensile test on a ductile polymer. Nominal stress (load divided by the initial cross-sectional area of the strip) is plotted against strain. In the bottom part of Figure 11, the change in the cross section of a specimen is shown. It has been observed for many years that some thermoplastics can be deformed at room temperature as a result of cold drawing. This is a remarkable phenomenon, in which the plastic deformation is concentrated in a small region of the specimen. The behavior at low strains (from A to A') is homogeneous and the stress rises steadily with increasing strain. Therefore, the relationship between these two magnitudes is linear, since this is the region where Hooke's law is obeyed, and the polymer may recover its original shape if the stress is removed (linear elastic or viscoelastic behavior, i.e., instantaneous or temporally delayed). At B the sample thins to a smaller cross-section at some point, with the formation of the neck. A maximum point is reached, called the yield point, characterized by its two coordinates:
yield strain and yield stress (εY, σY). In general, the yield point indicates the beginning of the plastic deformation. Further extension occurs by the movement of this neck through the specimen as it progressively thins from its initial state to the final drawn state. A decrease of the nominal stress is observed at strains higher than εY, down to the value corresponding to point C. In the region CD, the material is deformed without any apparent change of the nominal stress, giving rise to the phenomenon called cold drawing. Starting from point D, the stress again goes up considerably, indicating that the material becomes rigid. This process is labeled strain hardening. After that, fracture of the material occurs at point X. Information about three important mechanical parameters can be obtained from this type of test: stiffness, strength and toughness. The stiffness is evaluated by the elasticity modulus, E, which is the slope in the initial linear region (AA'). The mechanical strength is related to the highest stress that the material can bear before breaking; its value is usually given by the breaking stress. The concept of toughness may be defined in several ways, one of which is in terms of the area under the stress-strain curve. It is, therefore, an indication of the energy that a material can absorb before its rupture.

8. Dynamic mechanical measurements

The transient experiments referred to above provide information on the viscoelastic behavior of materials in the time domain for values of t larger than about 0.1 seconds. However, it is often necessary to obtain the responses of viscoelastic materials to perturbation force fields at very short times. For instance, it is important to know how the storage and loss viscoelastic functions change with the frequency of the perturbation when materials are used as acoustic isolators in buildings, or to eliminate noise in vibrating metallic sheets by depositing layers of viscoelastic materials on them, etc.
Information of this kind can be obtained by studying the responses of materials to dynamic perturbation fields. Moreover, since an experiment carried out at a frequency ω is qualitatively equivalent to one performed in the time domain at t = ω⁻¹, the combination of transient and dynamic experiments provides information on the viscoelastic behavior of materials over a wide time scale covering several decades. The information thus obtained is important not only on practical grounds but also from a basic point of view: knowledge of the viscoelastic responses over a wide time scale is essential for the analysis of the molecular motions responsible for the viscoelastic behavior of materials.

In Dynamic Mechanical Thermal Analysis, DMTA, when a polymer reaches the temperature or frequency range at which a chain movement occurs, the energy dissipated increases up to a maximum. Thus, dynamic mechanical analysis makes it possible to study these maxima, not only the main one, related to the glass transition, but also local movements not detected by other techniques. When a perfectly elastic solid undergoes a sinusoidal deformation, the corresponding stress is in phase with the strain; in a completely viscous fluid, on the contrary, the phase angle is ninety degrees. In a viscoelastic solid, such as a polymer, the phase lag lies between these two extreme archetypes, usually up to 20 degrees, depending on the temperature range studied. Therefore, the stress wave is delayed with respect to the strain wave, and its decomposition leads to a component in phase and another one out of phase (see Figure 12).
178
MARÍA L. CERRADA
Figure 12.

These two components allow one to obtain the two parts of the complex modulus (shear, bending, compression or tensile modulus, depending on the geometry of the sample clamping). These two contributions are named the storage and loss modulus, E′ and E″ respectively, related to the energy stored for the next cycle of the wave and to the energy dissipated, respectively. The ratio of loss modulus to storage modulus is the loss tangent, connected with the mechanical damping (the logarithmic decrement), Δ, by the formula:

tan δ = E″/E′ ≅ Δ/π

which is almost exact for low values of the loss tangent, as occurs in polymers. Thus the close relation between mechanical damping and loss tangent is seen. When a polymer undergoes a molecular motion, the corresponding relaxation is depicted as a maximum in the loss tangent and in the loss modulus, while the storage modulus displays an abrupt change. Depending on the variable considered, frequency or temperature, the characteristics of a single relaxation are as shown in Figure 13, which has mirror symmetry.
[Figure 13: E′, E″ and tan δ plotted against log ω (left) and against T (right); the relaxation is centered at ωτ = 1.]
Figure 13.

Therefore, the variation of these viscoelastic parameters can be studied as a function of either frequency or temperature. In the first case, the real part (storage modulus) exhibits a strong increase in the relaxation zone, where the imaginary component (loss modulus) shows a maximum, equally observable in the values of tan δ at slightly lower frequencies. In a similar fashion, the variation of the moduli and of tan δ as a function of temperature, at a given working frequency, shows a diminution of the storage modulus upon increasing temperature; this decrease is most pronounced in the relaxation zones, at which the loss modulus presents maxima of variable intensity. These maxima also appear upon plotting the variation of tan δ as a function of temperature, though at temperatures higher than those of the loss modulus maxima.

The location of a relaxation is dependent on frequency, as shown in Figure 14 for the magnitude plotted, E″, for the two glass transitions found in graft copolymers of poly(tert-butyl acrylate-g-styrene). Therefore, standard conditions need to be chosen for expressing results; the loss modulus vs temperature plot at a frequency of 1 Hz is the most commonly used. For the relaxation associated with the glass transition, this representation leads to a value some degrees higher than the glass transition temperature measured by differential scanning calorimetry, DSC. Even though the Tg location can be more easily measured by DSC, in some cases it is more easily found by means of DMTA. Moreover, dynamic mechanical analysis shows other chain movements and can also be used for studying local motions.
[Figure 14: E″ (MPa) vs T (°C) at 1, 3, 10 and 30 Hz, showing the αPtBA and αPS relaxations.]
Figure 14.

The temperature dependence of molecular mobility is characterized by various relaxation processes in which a certain mode of chain motion sets in (or freezes with decreasing temperature). The most important mechanism in amorphous polymers, or in the amorphous regions of semicrystalline polymers, is the glass transition, at whose temperature, Tg, the micro-Brownian motion of segments of the main chain becomes active. The length of these segments is inversely proportional to the flexibility of the main chain; in common polymers it is estimated to be several tens of C-C bonds. The glass transition has received much attention in polymer physics because it is accompanied by significant changes in the mechanical properties (the modulus of elasticity decreases by three or four orders of magnitude) and other physical properties of the sample, all of which are also important with respect to applications.

The remaining relaxation processes that appear in glassy polymers are called secondary relaxations. Since they are associated with the motions of short segments of the main chains, or with the motions of parts or the whole of the side chains, they are accompanied by much smaller changes in the physical quantities than those exhibited around the glass transition. Until now, secondary processes have received relatively little attention, probably also because of their lesser practical importance, so the understanding of the molecular mechanisms involved is still incomplete and the description of the observed phenomena is semiquantitative at best. Secondary relaxations are closely associated with limited molecular mobility, i.e., with the rotational and vibrational motions of relatively short chain sections. The motional units may be identified with sequences of the main chains consisting of four to six groups, or with side chains and their parts.
Generally, it is believed that such a motional unit may assume several stable conformations separated from each other by potential barriers. The frequency of the jumps over a potential barrier is inversely proportional to its height and proportional to the absolute temperature. Therefore, these types of relaxations are well described by an Arrhenius expression:

f = f0 · e^(−ΔH/RT)
where f is the working frequency, T is the absolute temperature at which the maximum of the relaxation occurs, and ΔH is the activation energy of the relaxation process. However, this expression is not fulfilled by the relaxation associated with the glass transition; that process follows instead the equation of Williams, Landel and Ferry. Consequently, the plots of ln f vs T⁻¹ are straight lines only for the secondary relaxations. The relaxation associated with the glass transition does not follow this dependence, except for measurements performed in small intervals of low frequencies, in which case nearly straight lines (with a high slope) are also obtained for the glass transition. The activation energy, usually qualified as "apparent", obtained from the slopes of those plots (called relaxation maps), measures the degree of cooperativity of the movement. Therefore, the glass transition usually has the highest apparent activation energy (around 400 kJ·mol⁻¹), while local movements (parts of the chain, lateral chains, bulky groups) have activation energies as low as 30 kJ·mol⁻¹. This can be seen in Figure 15, which also shows the merging of the alpha and beta relaxations at frequencies around 10¹³ Hz.

The Arrhenius plot for the relaxation related to the glass transition is not a straight line over the whole temperature range but a curve, as mentioned above. However, at low frequencies (those of dynamic mechanical analysis, circles in Figure 15) the line is almost straight and has a high slope. The squares in the plot are results from dielectric measurements. Dielectric analysis has a different physical basis but gives similar information on macromolecular movements (referring, obviously, to polymers with dipoles in the chain) and allows working at frequencies much higher than the mechanical ones. However, dielectric analysis hardly detects movements in chains with weak dipoles, such as polyolefins.
Therefore, even though dynamic mechanical analysis cannot cover a frequency interval as wide as the dielectric one, DMTA results obtained at several frequencies within an interval of only two or three decades, at a sufficiently low heating rate, provide very complete information on the molecular dynamics of the polymer, including the activation energies of the different relaxations.

[Figure 15: relaxation map, log f (Hz) vs 10³/T (K⁻¹), showing the α and β relaxations; mechanical (circles) and dielectric (squares) data.]
Figure 15.
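As an illustration of the Arrhenius analysis described above, here is a minimal Python sketch. The data points are hypothetical, generated from an assumed ΔH of 50 kJ·mol⁻¹ and f0 = 10¹³ Hz; the apparent activation energy is then recovered from the slope of ln f vs 1/T.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def apparent_activation_energy(temps_K, freqs_Hz):
    """Least-squares slope of ln f vs 1/T; returns Delta H in kJ/mol.
    From ln f = ln f0 - DH/(R*T), the slope is -DH/R."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(f) for f in freqs_Hz]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R / 1000.0

# Hypothetical secondary relaxation: temperatures of the loss maxima at
# working frequencies of 1, 3, 10 and 30 Hz, generated with DH = 50 kJ/mol.
DH_true, f0 = 50e3, 1e13
freqs = [1.0, 3.0, 10.0, 30.0]
temps = [DH_true / (R * math.log(f0 / f)) for f in freqs]
print(apparent_activation_energy(temps, freqs))  # ~50 kJ/mol
```

In practice the (T, f) pairs would come from the loss modulus maxima measured at each working frequency, as in Figure 14.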
Molecular motions underlying secondary relaxation processes are a function of the constitution and structure of the polymer, but it is possible to find groups of polymers, usually of similar composition, which exhibit an analogous relaxation (or relaxations) characterized by similar values of temperature location, activation energy, relaxation strength, etc. The molecular motions which give rise to secondary relaxation processes above liquid nitrogen temperature have been tentatively divided into:
• Local main chain motions (A in Figure 16)
• Side chain rotations about the bonds linking side chains to the main chain (B in Figure 16)
• Internal motions within the side chain (C in Figure 16)
• Motions occurring within the crystalline regions
• Diluent-induced secondary relaxations (D in Figure 16)
[Figure 16: schematic of motion types A-D on a polymer chain bearing an ester side group, with a diluent molecule.]
Figure 16.
Fundamentals of DMA
Ramón Artiaga and Ana García
Departamento de Ingeniería Industrial II. Escola Politécnica Superior. Universidade da Coruña. Mendizábal s/n, 15403 Ferrol, Spain
[email protected]

1. Definitions

1.1. Rheology
The term Rheology was coined by Professor Bingham of Lafayette College, Pennsylvania. Professor Bingham described it as the science of the flow and deformation of materials [1], a definition accepted by the American Society of Rheology in 1929. Howard Barnes described flow as a deformation, at least part of which is nonrecoverable [2]. William W. Graessley considered deformation a change in shape [3]. For Russell R. Ulbrich, Rheology was the study of stress-deformation relationships [4], allowing one to analyze the constitutive equations linking the stresses and deformations in materials [5-7]. According to N. W. Tschoegl, a constitutive equation or rheological equation of state establishes a relationship between a dynamic quantity, stress, and a kinematic quantity, strain, through one or more parameters or functions representing the characteristic response of the material per unit volume, regardless of size or shape [8].

1.2. DMA
Dynamic Mechanical Analysis (DMA) studies the behavior of materials subjected to a dynamic or steady deformation; it looks at how they respond to an imposed stress. The stress deforms the material, and DMA measures the strain, calculating how much energy is stored or dissipated during the process. Although gels and some viscous samples can be tested, a DMA instrument is specially adapted to the rheological study of solid-like materials. DMA is also considered a thermal analysis technique: because temperature can be controlled during a test, thermal properties, such as the glass transition, can be studied by means of DMA.

1.3. Flow and deformation parameters
Since any rheological study deals with deformation and stress, the following terms will be used consistently:

Stress: the force deforming the sample per unit area; its SI unit is the Pa.
τ (or σ) = F/A

Strain: the displacement of the sample relative to the sample length; it is dimensionless.
In shear: γ = ΔX/ΔY
In tensile: ε = ΔL/L0
184
RAMÓN ARTIAGA AND ANA GARCÍA
Strain rate: the change of shear strain per unit time. It is represented by γ̇ in shear and by ε̇ in tensile; its units are s⁻¹.

2. Ideal and Real Behaviors

The range of a material's rheological behavior falls between the two classical extremes: the ideal solid and the ideal fluid, described by Hooke's and Newton's laws, respectively.

2.1. Hooke's Law
Describes the behavior of an ideal elastic solid, relating the applied strain to the resultant stress (or vice versa). The proportionality factor is called the material's modulus and is denoted E or G. Young's modulus is the ratio of stress to strain (E* in dynamic tests), measured in tensile or bending mode:

E = σ/ε

The shear modulus is the ratio of shear stress to shear strain (G* in dynamic tests):

G = τ/γ

For most rubbery polymers E = 3G, assuming a Poisson ratio of 1/2 (Poisson's ratio is the lateral contraction relative to the extension in tensile).

2.2. Newton's Law
Describes the behavior of ideal fluids, relating stress to shear rate. The proportionality factor is the viscosity, η. Ideal viscous fluids are linear in terms of shear rate, not shear thinning:

τ = η·dγ/dt = η·γ̇
In order to distinguish Newtonian and non-Newtonian behaviors, it is important to bear in mind that the shear viscosity of Newtonian fluids does not vary with shear rate and is constant with time of shearing.

2.3. Actual Materials: linear and non-linear regions
Most materials obey these laws over a limited range of stresses; beyond this range, they show non-linear behavior. This unit provides examples of how materials actually behave. Figure 1 shows the stress-strain plot obtained from a wire made from a nickel-titanium alloy subjected to DMA; the behavior is similar to that of an ideal solid up to the breaking point. Figure 2 shows the typical stress-strain plot obtained from a thermoplastic. In this case, after the linear region, there is a wide range of non-linear behavior, where creep is involved. Most polymer melts show a dependence of viscosity on shear rate, as in Figure 3, with a linear region at low shear rates and shear thinning at higher shear rates.
Figure 1. Strain rate test of a nickel-titanium alloy
[Figure 2 annotations: Yield Stress, Drawing Stress, Stress at Failure, Elongation at Yield, Ultimate Elongation.]
Figure 2. Typical plot of a strain rate test for a thermoplastic polymer.
[Figure 3: Newtonian region (η = const) and non-Newtonian region (η = f(γ̇)).]
Figure 3. Typical viscosity-shear rate plot of a fluid showing linear and non-linear regions.

2.4. Non-Newtonian time independent liquids
Liquid viscosity depends on the shear rate, but is independent of the time of shearing. The following cases are possible:
• Pseudoplasticity: viscosity decreases as the shear rate increases; this is also known as shear thinning.
• Dilatancy or shear thickening: viscosity increases as the shear rate increases.
• Bingham fluids: there is no deformation below a yield stress; above the yield stress the behavior may be Newtonian or non-Newtonian.
Figure 4 summarizes Newtonian and non-Newtonian time independent flow types.

[Figure 4: shear stress σ vs shear rate γ̇ for Bingham, Bingham plastic, pseudoplastic (shear thinning), Newtonian and dilatant (shear thickening) fluids.]
Figure 4. Newtonian and non-Newtonian time independent flow types.
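The flow types of Figure 4 can be sketched with a single constitutive expression: the Herschel-Bulkley form τ = τy + K·γ̇ⁿ reduces to each case for particular parameter values. The parameter values below are arbitrary, not fitted to any real fluid.

```python
def shear_stress(gamma_dot, K=1.0, n=1.0, tau_y=0.0):
    """Herschel-Bulkley model: tau = tau_y + K * gamma_dot**n.
    n = 1, tau_y = 0  -> Newtonian
    n < 1             -> pseudoplastic (shear thinning)
    n > 1             -> dilatant (shear thickening)
    n = 1, tau_y > 0  -> Bingham plastic
    """
    return tau_y + K * gamma_dot ** n

def apparent_viscosity(gamma_dot, **params):
    # Apparent viscosity is the stress divided by the shear rate.
    return shear_stress(gamma_dot, **params) / gamma_dot

rates = [0.1, 1.0, 10.0, 100.0]              # shear rates, 1/s
newtonian = [apparent_viscosity(g, n=1.0) for g in rates]  # constant
pseudo    = [apparent_viscosity(g, n=0.5) for g in rates]  # decreasing
dilatant  = [apparent_viscosity(g, n=1.5) for g in rates]  # increasing
```

The constant, decreasing and increasing viscosity curves reproduce the Newtonian, pseudoplastic and dilatant branches of Figure 4.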
2.5. Non-Newtonian time dependent fluids
The viscosity of the fluid depends on both the shear rate and the time during which the shear rate is applied. There are two typical cases:
• Thixotropy: the apparent viscosity decreases with time under constant shear rate or shear stress, followed by a gradual recovery when the stress or shear rate is removed.
• Rheopexy: the apparent viscosity increases with time under constant shear rate or shear stress, with a gradual recovery when the stress or shear rate is removed.
[Figure 5: η and γ̇ vs time, for γ̇ = constant followed by γ̇ = 0.]
Figure 5. Idealized plot showing the dependence of viscosity on shear rate and time in a thixotropic fluid.

3. Viscoelasticity

3.1. Phenomena where viscoelasticity is involved
Viscoelasticity is the word used to describe material behaviors falling between those of ideal solids and ideal liquids. Examples in which viscoelastic effects are very evident include:
• Rate dependent behavior in solids. Solids in general behave with greater stiffness at high deformation rates. "Silly putty" is a silicone in which a dramatic change of stiffness occurs at deformation rates that fall within normal use.
• Rod climbing in liquids. Newtonian and viscoelastic behaviors of a liquid subjected to stirring action are depicted in Figure 6; viscoelastic liquids tend to climb the stirring rod.
• Die swell in the extrusion of thermoplastics (Figure 7).
• Plastic "memory" in the injection molding of thermoplastics. It is well known that if the mold is opened while the temperature of a part is still high, the part tends to twist, remembering the movement imposed by the screw in the extruder.
• Melt fracture. One of the most frequent problems in extrusion, it is related to the deformation rate when a material passes through a nozzle. It is represented in Figure 8.
Figure 6. Newtonian and viscoelastic liquids under stirring action
Figure 7. Die swell in extrusion
Figure 8. Melt fracture
3.2. Viscoelasticity and Time Dependence
Nothing behaves exactly like an ideal solid or an ideal liquid. This fact seems to be reinforced by sayings like "everything flows if you wait long enough". Actually, apart from the case of silly putty mentioned above, there are many examples in nature and real life. Ice is considered a solid, but it is well known that glaciers have flowed over millennia. Another case in point is that old stained glass windows in European cathedrals are reported to be thicker at the bottom, a phenomenon attributed to the slow flow of the glass while in place. Although water is a liquid, an impact at high speed against water may cause damage because, at high deformation speeds, water behaves more like a solid. The time scale is critical: solid-like behavior is favored by short time scales, and liquid-like behavior by longer ones.

3.3. Viscoelastic Characterization
Although another section offers a more detailed description of test types, three methods of viscoelastic characterization may be mentioned here:
• Creep: the stress remains constant while the strain is recorded as a function of time. The most accurate instrument for this kind of test in solids is a TMA (thermomechanical analyzer); DMA also works in TMA mode.
• Stress relaxation: the strain is held constant while the stress is recorded as a function of time.
• Dynamic mechanical analysis: the sample is subjected to a sinusoidal stress and the strain is recorded as a function of time. These tests are normally performed with DMA instruments, although they may be done with other rheometers.

3.4. Time-Temperature Superposition
Williams, Landel and Ferry [9] observed relationships between time and temperature in the mechanical properties of many polymers. They empirically obtained an equation that makes it possible to shift the data beyond the experimental range. Later on, a theoretical basis and other models were developed.
• Relationship between time and temperature
A short time is equivalent to a low temperature. Figure 9 shows how the modulus changes with time and temperature in a typical polymer.
Figure 9. Modulus plotted against time and temperature in polymers.
• Relationship between frequency and temperature
The effect of a high temperature is similar to that of a low frequency or deformation rate. Figure 10 shows the modulus variation with temperature and frequency.
G’ G’’
Log Temperature
G’
G’’ Log Frequency
Figure 10. Modulus variation with temperature and frequency in polymers.

• Time and temperature superpositioning
Since data taken at higher temperatures represent data taken at lower frequencies (long times), and data taken at lower temperatures represent the behavior at high frequencies (short times), the data can be shifted horizontally to create a master curve that describes the behavior beyond the experimental range. Figures 11 and 12 show, respectively, the storage modulus variation with frequency at different temperatures and the master curve constructed over a wider range of frequencies.
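The horizontal shifting can be sketched with the WLF equation itself, log aT = −C1(T − Tref)/(C2 + T − Tref). The sketch below uses the so-called universal constants (C1 = 17.44, C2 = 51.6, strictly valid when Tref = Tg) and hypothetical temperatures; real master curves use constants fitted to the material.

```python
import math

def wlf_log_shift(T, T_ref, C1=17.44, C2=51.6):
    """WLF equation: log10(aT) = -C1*(T - T_ref) / (C2 + T - T_ref).
    aT multiplies the frequency axis to move an isotherm onto the
    reference curve."""
    return -C1 * (T - T_ref) / (C2 + T - T_ref)

def shift_isotherm(freqs, T, T_ref):
    """Reduced frequencies f * aT for a master curve referenced to T_ref."""
    aT = 10.0 ** wlf_log_shift(T, T_ref)
    return [f * aT for f in freqs]

# Hypothetical example: shift a 150 degC isotherm to a 130 degC reference.
# T > T_ref gives aT < 1, so the data move to lower reduced frequencies,
# consistent with "high temperature is equivalent to low frequency".
reduced = shift_isotherm([0.1, 1.0, 10.0], 150.0, 130.0)
```

At T = Tref the shift is zero (aT = 1), so the reference isotherm is left in place.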
6
105
104
)
130°C 140°C 150°C 160°C 170°C 180°C 190°C 200°C 210°C 220°C 230°C 240°C 250°C
G' ( [Pa]
103
102
101
100 10-1
100
101
102
103
Freq [Hz]
Figure 11. Overlay of storage modulus-frequency plots obtained at different temperatures.
Figure 12. Master curve obtained by shifting the data obtained at different temperatures.

4. Principle of DMA

As seen below, a DMA apparatus may be used for working in several modes; however, a stress controlled instrument is devised for working in dynamic mode. The stress controlled dynamic mode consists of measuring the strain in the sample while applying a controlled sinusoidal or waveform stress. It is also possible to work with strain control; in this case, a controlled sinusoidal strain is applied while the stress response is measured.
[Figure 13 annotations: strain γ(t) and stress σ(t) vs time with phase lag δ; ideal solid δ = 0°, ideal liquid δ = 90°, viscoelastic material 0° < δ < 90°; the stress is decomposed into elastic (σ′) and viscous (σ″) components.]
Figure 13. Above: stress and strain variations with time in a dynamic experiment. Below: separation of the elastic and viscous components of stress.
ε = ε0 sin(ωt)
σ = σ0 sin(ωt + δ)

Using sin(a + b) = sin(a)cos(b) + cos(a)sin(b):

σ = σ0 sin(ωt)cos(δ) + σ0 cos(ωt)sin(δ)

Define:
σ′ = σ0 cos(δ)  (in-phase component)
σ″ = σ0 sin(δ)  (out-of-phase component)
Complex stress:
σ* = σ′ + iσ″ = σ0 cos(δ) + iσ0 sin(δ)

What is the complex strain? Since ε = ε0 sin(ωt) and the time origin is defined by ε:
ε′ = ε0, ε″ = 0
Thus the complex strain is:
ε* = ε′ + iε″ = ε0

What is the complex modulus? Starting from Hooke's law, σ = Eε, which in complex form is σ* = E*ε*. Then:
E* = σ*/ε* = (σ′ + iσ″)/(ε′ + iε″)
E* = (σ0 cos(δ) + iσ0 sin(δ))/ε0
E* = (σ0/ε0) cos(δ) + i(σ0/ε0) sin(δ)
E* = E′ + iE″, with
E′ = (σ0/ε0) cos(δ)
E″ = (σ0/ε0) sin(δ)

where E′ is the storage modulus, representing recoverable energy, that is, solid-like behavior, and E″, the loss modulus, represents dissipated energy or liquid-like behavior.
For an ideal solid, δ = 0, so E′ = σ0/ε0 and E″ = 0.
For an ideal liquid, E′ = 0 and E″ = σ0/ε0.

The magnitude of the complex modulus:
|E*| = ((E′)² + (E″)²)^1/2 = (σ0/ε0)(cos²(δ) + sin²(δ))^1/2
|E*| = σ0/ε0

The tan δ
The tan δ, also called the loss factor or the loss tangent, is related to the storage and loss moduli as follows:
E″/E′ = sin(δ)/cos(δ) = tan(δ)
For an ideal solid: tan(δ) = 0. For an ideal liquid: tan(δ) = ∞.
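The relations derived above can be checked numerically. This minimal sketch uses synthetic waveforms with hypothetical values (ε0 = 1%, σ0 = 10 MPa, δ = 10°): the sampled stress is projected onto sin(ωt) and cos(ωt) to recover the in-phase and out-of-phase components, from which E′, E″, tan δ and |E*| follow.

```python
import math

def dynamic_moduli(stress, times, omega, eps0):
    """Recover E' and E'' from a sampled stress wave by projecting it
    onto sin(wt) and cos(wt). Assumes uniform sampling over a whole
    number of cycles, and a strain wave eps0*sin(wt)."""
    n = len(times)
    sigma_in  = 2.0 / n * sum(s * math.sin(omega * t) for s, t in zip(stress, times))
    sigma_out = 2.0 / n * sum(s * math.cos(omega * t) for s, t in zip(stress, times))
    return sigma_in / eps0, sigma_out / eps0   # E', E''

# Synthetic 1 Hz experiment (hypothetical values).
omega = 2.0 * math.pi
eps0, sigma0, delta = 0.01, 1.0e7, math.radians(10.0)
times = [i / 1000.0 for i in range(1000)]      # exactly one cycle
stress = [sigma0 * math.sin(omega * t + delta) for t in times]

E1, E2 = dynamic_moduli(stress, times, omega, eps0)
tan_delta = E2 / E1                # should equal tan(10 deg)
magnitude = math.hypot(E1, E2)     # should equal sigma0/eps0
```

The recovered tan δ equals E″/E′ and the magnitude equals σ0/ε0, as derived above.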
5. Geometries
The actuator in DMTA is a drive shaft that moves linearly, forwards and backwards. A number of fixtures have been designed to permit testing different samples in different ways. Single and double cantilever bending geometries were devised to test solid bars; laminates, for instance, can be tested directly in this manner, and thin layers can also be tested on a solid bar support. Figure 14 shows a single cantilever assembly. The three point flexural test, considered the most suitable for very stiff materials, is shown in Figure 15. It avoids the boundary effect of the clamps that often affects cantilever geometries, and also works well with materials that expand significantly with temperature. The tensile geometry, for cylindrical and rectangular samples, is ideal for samples that require a small force for deformation, such as films, fibers and elastomers; Figure 16 shows a rectangular sample assembled in the fixtures. Compression fixtures, seen in Figure 17, are appropriate for evaluating resilience in foams and gels. The next type is the shear sandwich. Although DMA was developed to test more or less solid-like materials, this geometry can be used to test pastes, gels, melts and viscous fluids. Care must be taken to prevent the sample from leaking onto the drive shaft; a horizontal position is normally preferred for this kind of test, as shown in Figure 18. Elastomers can also be tested with this geometry.
Figure 14. Single cantilever geometry
Figure 15. Three point flexion geometry
Figure 16. Rectangular tensile geometry
Figure 17. Compression geometry
Figure 18. Shear geometry
6. Modes of operation and variables
Although DMA instruments are built with internal stress control, in practice they can work under both stress and strain control. In either case, the instrument can be operated in dynamic and stationary-transient modes. Tests are usually classified according to the type of control and the mode of operation, and can be summarized as follows:
• Transient
– Static load: creep and TMA mode
– Constant strain: stress relaxation
– Strain rate testing: stress-strain curves
• Dynamic
– Single point: to set parameters
– Time sweep at constant frequency and strain (or stress)
– Dynamic strain sweep
– Frequency sweep
– Temperature sweep
– Combinations of frequency and temperature sweeps

The main variables involved in a strain controlled DMA test are:
– Deformation or strain
– Rate or frequency of deformation
– Temperature ramp or step isothermal
– Time
– Stress, which is the response, since it is a strain controlled experiment
In a stress controlled experiment, strain is the measured response.

Figures 19 to 25 illustrate the evolution of the controlled variables and responses in typical operational modes. Figure 19 shows how the strain amplitude varies in a dynamic strain sweep test. It is normal to find a region of strain amplitude where the storage modulus is constant or varies linearly, followed by a non-linear region where the storage modulus decreases. The normal evolution of the moduli with frequency is plotted in Figure 20: the higher the frequency, the higher the modulus. Viscosity decreases, an outcome consistent with the pseudoplastic behavior of most polymers. On the left-hand side of Figure 21 there are two possible temperature profiles in a temperature controlled experiment; the evolution of the moduli and tan δ is plotted on the right, showing a peak in tan δ that corresponds with the glass transition. In many amorphous polymers the storage modulus decreases by three decades in the glass transition region. Figures 22 to 24 represent the responses of ideal solids (elastic), ideal liquids (viscous) and polymers (viscoelastic) to step changes in stress, strain and strain rate, respectively. Earlier, the stress-strain curve obtained from a nickel-titanium wire in a strain rate test was presented in Figure 1; its behavior, almost linear until breaking, is very different from that obtained with thermoplastics. Figure 25 shows the temperature profile and response of a shape memory alloy under constant stresses of different magnitudes.
Figure 19. Commanded strain and typical storage modulus evolution of a polymer under dynamic strain sweep testing.
Figure 20. Frequency dependence of moduli and viscosity in a frequency sweep test.
Figure 21. Step and ramp profiles of temperature on the left. Temperature dependence of the moduli on the right.
Figure 22. Material response to a step change in stress deformation (Creep Testing)
Figure 23. Material response to a step change in strain deformation (Stress relaxation testing)
Figure 24. Material response to a step change in strain rate deformation (steady testing)
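The step-stress (creep) responses sketched in Figures 22 to 24 can be reproduced with the simplest mechanical analogues. In this sketch the Kelvin-Voigt element stands in for the viscoelastic solid, and the values of E and η are arbitrary, chosen only to give a retardation time η/E of one second.

```python
import math

def creep_strain(t, sigma0, model, E=1e9, eta=1e9):
    """Strain response to a constant stress sigma0 applied at t = 0,
    for the three archetypes of Figures 22-24 (a sketch; the
    Kelvin-Voigt element is used for the viscoelastic solid)."""
    if model == "elastic":          # instantaneous, fully recoverable
        return sigma0 / E
    if model == "viscous":          # strain grows linearly, unrecoverable
        return sigma0 * t / eta
    if model == "viscoelastic":     # delayed approach to sigma0/E
        return sigma0 / E * (1.0 - math.exp(-E * t / eta))
    raise ValueError(model)

sigma0 = 1e6  # 1 MPa stress step (hypothetical)
short = creep_strain(1e-6, sigma0, "viscoelastic")  # almost no strain yet
late  = creep_strain(10.0, sigma0, "viscoelastic")  # ~ sigma0/E, elastic limit
```

The viscoelastic curve starts near zero and creeps toward the elastic value σ0/E, reproducing the delayed response drawn in Figure 22.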
[Figure 25: strain (%) and temperature (°C) vs time (s); constant stresses of 2.3×10⁸ Pa and 8.0×10⁶ Pa.]
Figure 25. Strain plots at two different constant stresses, with a ramp-iso-ramp temperature profile, for a nickel-titanium alloy.
7. Choosing the magnitude for the controlled parameters
Sometimes, previously gathered information may help one choose the experimental conditions for analyzing a new material. In any case, the experimental setup should take into account which parameters need to be determined for the material and what the physical form of that material is. When considering the test of a new sample in DMA, many questions may arise, such as which geometry or mode of operation is most suitable. In general, these questions can be answered by following the criteria indicated in the previous paragraphs and remembering the limitations of the instrument. For example, for testing a gel, shear sandwich is possible in principle. An uncured thermosetting resin can be supported on a wire mesh and tested in double cantilever mode. A very stiff metal bar is not appropriate for testing in tensile mode, because the force needed to deform it is higher than the maximum force allowed by the instrument. On the other hand, a very thin wire made of the same metal can be tested in tensile mode.

Once the geometry and operational mode have been chosen, assuming that it is clear which parameters are of interest, it is necessary to set the magnitude of the controlled parameters. This is especially true of dynamic experiments, where one parameter is kept constant. To illustrate the problem, consider the case of a dynamic strain controlled temperature ramp test. The temperature range is of interest to the study and is constrained by the instrument's upper limit, as well as the sample's features, such as its melting point. The other two magnitudes to set are the frequency and the strain amplitude. It is a good idea to perform two quick tests at room temperature to choose these magnitudes: a dynamic strain sweep and a frequency sweep.

• Dynamic strain sweep test. This test is designed to find the region of the material's linear behavior. The range of strain amplitude for this test can be as wide as the instrument allows. Logarithmic variation is less time consuming. A provisional value for the frequency has to be chosen; in general, one Hertz is recommended.

• Frequency sweep test. Looking at the dynamic strain sweep result, a strain amplitude value has to be chosen from the linear region. Again, the range of frequency variation is limited by the instrument, and logarithmic variation is possible. In general, frequencies lower than 0.1 Hertz are not recommended, since they are more time consuming and do not hold any practical advantage.

Considering the results obtained in both preliminary tests, a combination of frequency and strain should be selected, making sure that this combination falls within the linear regions of both tests. Otherwise, additional preliminary tests should be performed until this requirement is fulfilled. One Hertz is recommended, whenever possible, because a lot of work has been accomplished at this frequency and, therefore, comparison is possible.
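The selection procedure above (strain sweep first, then frequency sweep) can be partly automated once the sweep data are available. A minimal sketch in Python for locating the linear region of a dynamic strain sweep; the data, the 5% tolerance and the helper name are illustrative assumptions, not part of the original text:

```python
import numpy as np

def linear_region(strain_amp, storage_modulus, tol=0.05):
    """Return the strain amplitudes over which the response is roughly
    linear, i.e. the storage modulus stays within `tol` (fractional
    deviation) of its low-strain value."""
    reference = storage_modulus[0]  # lowest-amplitude point as reference
    within = np.abs(storage_modulus - reference) / reference <= tol
    # keep the contiguous linear run that starts at the lowest amplitude
    end = len(within) if within.all() else int(np.argmin(within))
    return strain_amp[:end]

# synthetic sweep: modulus flat at low strain, softening at high strain
amps = np.logspace(-3, -1, 20)
E_storage = 2.0e9 / (1.0 + (amps / 0.02) ** 2)
linear = linear_region(amps, E_storage)
print(f"linear behavior up to a strain amplitude of ~{linear[-1]:.4f}")
```

A strain amplitude taken from this region, together with a frequency chosen from the linear region of the frequency sweep (typically 1 Hz), then defines the conditions for the temperature ramp.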
RAMÓN ARTIAGA AND ANA GARCÍA
8. Interpreting Phenomena
Polymers may show several relaxation transitions over the temperature range accessible to DMA. These relaxation phenomena are characterized by a decreasing step in the storage modulus and a peak in tan δ. All of them are frequency dependent. Figure 26 provides an idealized plot of the storage modulus and loss tangent for an amorphous polymer. Figure 27 reveals the effect of an increase in the frequency, shifting the transitions to higher temperatures. The transitions are normally denoted by Greek symbols, starting from α for the transition closest to the apparent melting process. The α transition corresponds to the glass transition in amorphous polymers. In the case of some crystalline polymers, it is not clear which transition corresponds to the glass transition, since certain relaxation phenomena involving chain segments or slippage between crystallites may happen inside the crystalline superstructures, at temperatures between Tg and Tm. These strongly affect the properties within this temperature range. It is generally accepted that the glass transition is due to large scale main chain molecular movement. Other relaxations may happen at temperatures below the glass transition. They are caused by the motions of smaller segments within the main chain or of pendant groups and may correlate with important properties, such as toughness. Cold crystallization may also happen in crystalline polymers at temperatures above Tg. It appears as an increase in the storage modulus.
Figure 26. Idealized plot of the evolution of the moduli (E' and E'') with temperature, showing the typical α, β and γ relaxations in a thermoplastic polymer.
FUNDAMENTALS OF DMA
Figure 27. Idealized plot showing how the frequency (low versus high) shifts the relaxation peaks along the temperature axis.

9. Applications
• Determining mechanical properties, such as the storage modulus, loss modulus and loss tangent of materials, over a spectrum of time or frequency and temperature. Although there are many quantities which can be determined by DMA, knowing any two allows one to calculate the remainder.
• Determining the mechanical Tg. DMA is a very sensitive method for measuring the glass transition. It is especially indicated for polymers that show such a slight change of heat flow at the glass transition that it cannot be seen by DSC, because their content of amorphous phase is small. Even in those cases, the stiffness changes sharply at the glass transition.
• Determining low temperature relaxations, which correlate with impact resistance behavior.
• Predicting long term mechanical behavior.
• Analyzing thin samples, fibers and supported systems. The high sensitivity of DMA for measuring stress and strain means that one can analyze samples where very small forces are involved. This is the case for fibers and films. For non self-standing samples, it is possible to use a support material. The support should show a constant storage modulus throughout the experiment and a very low loss modulus. Some classical forms of support include a wire mesh or a thin glass plate.
• Analyzing cure reactions. The gel point can be accurately measured by means of DMA. The value that is reported for the gel point is the time at which the storage modulus equals the loss modulus (see Figure 28). It is relatively independent of the frequency. Vitrification appears in isothermal DMA experiments as an increasing step in the storage modulus (see Figure 29). The vitrification time is calculated as the extrapolated end point of the process. Cure kinetics can be calculated from gel times obtained at different temperatures.
• Developing property-structure relationships.
Figure 28. Idealized plot of G' and G'' illustrating the gel point location, where tan δ = 1 at t = tgel.
Figure 29. Idealized plot of E' along the vitrification process, showing the vitrification time tv.
10. Choosing an instrument
Features that should be taken into account when choosing an instrument:
• Frequency range. Although one can compare instruments according to their specifications, it is important to take into account that very low frequencies are rarely used, since they are time consuming and do not provide practical information.
• Range of strain amplitude.
• Sensitivity in displacement.
• Sensitivity in force.
Dynamic Mechanical Analysis of Thermosetting Materials

R. Bruce Prime
IBM (Retired) / Consultant
[email protected] As described in an earlier paper “Thermal Analysis in Thermoset Characterization,” thermosetting polymers are unique. Unlike thermoplastic polymers, chemical reactions are involved in their use. As a result of these reactions the materials cross-link and become “set”, i.e. they can no longer flow or dissolve. In this paper the behavior and characteristics of thermosetting materials are briefly reviewed (see earlier paper for a more detailed discussion), followed by a detailed discription of dynamic mechanical analysis of the cure process, and ending with some comments on properties of cured thermosets. Cure is illustrated schematically in Fig. 1 for a material with co-reactive monomers such as an epoxy-diamine system. For simplicity the reaction of a difunctional monomer with a trifunctional monomer is considered. Reaction in the early stages of cure {(a) to (b)} produces larger and branched molecules and reduces the total number of molecules.
Figure 1. Schematic, two-dimensional representation of thermoset cure. For simplicity difunctional and trifunctional co-reactants are considered. Cure starts with A-stage monomers (a); proceeds via simultaneous linear growth and branching to a B-stage material below the gel point (b); continues with formation of a gelled but incompletely cross-linked network (c); and ends with the fully cured, C-stage thermoset (d). From Ref. 1.
Macroscopically the thermoset can be characterized by an increase in its viscosity η, as shown in Fig. 2. As the reaction proceeds {(b) to (c)}, the increase in molecular weight accelerates and all the chains become linked together at the gel point into a network of infinite molecular weight. The gel point coincides with the first appearance of an equilibrium (or time-independent) modulus, also shown in Fig. 2. Reaction continues beyond the gel point {(c) to (d)} to complete the network formation. Macroscopically, physical properties such as modulus build to levels characteristic of a fully developed network.
Figure 2. Macroscopic development of rheological and mechanical properties during network formation, from Newtonian liquid through the network at the gel point to Hookean solid, illustrating the approach to infinite viscosity and the first appearance of an equilibrium modulus Ge at the gel point. From Ref. 2.

Figure 2 illustrates the macroscopic progress from uncured to fully cured thermoset [2]. The uncured thermoset, often a mixture of monomers, is a Newtonian liquid. As cure progresses, the viscosity increases with increasing molecular weight, which can be monitored by rheological measurements. As the viscosity approaches infinity at the gel point, steady shear measurements reach their limits. Oscillatory or dynamic rheology and dynamic mechanical measurements can characterize material in the gelation region. Note that while DMA may be applied to uncured materials or materials below their gel point, the samples will require a support such as metal shim, glass fabric or wire mesh. As shown in this figure, while thermosetting materials will have a dynamic modulus below the gel point, it is only above the gel point that they will have an equilibrium or time-independent modulus. DMA is especially well suited to characterize supported samples from pre-gelation, or gelled unsupported samples, to the completion of cure, as well as fully cured thermosets.
Figure 3. Tg versus DSC fractional conversion for epoxy cure, with a slope of 5.4°C per % conversion over 90 – 100% conversion. Data from Wisanrakkit and Gillham, J. Appl. Polym. Sci. 42, 2453 (1991); equation from Venditti and Gillham, J. Appl. Polym. Sci. 64, 3 (1997):

ln(Tg) = [(1 − α) ln(Tg0) + (ΔCp∞/ΔCp0) α ln(Tg∞)] / [(1 − α) + (ΔCp∞/ΔCp0) α]

Figure 3 shows the Tg – conversion relationship for a typical epoxy-amine [3] fitted to the DiBenedetto equation [4]. Also shown is the Venditti-Gillham equation [5] relating Tg and conversion. Because of the 1:1 relationship between Tg and conversion, DMA is capable of monitoring the progress of the cure reaction through measurement of Tg, as illustrated in Fig. 4. Shown are the Tg – time curves for the same epoxy system, which give the same information as the conversion – time curves shown in the earlier paper.

Figure 4. Tg – time curves for the epoxy-amine cure of DGEBA-PACM-20 (1:1), Tg∞ = 178°C. From Wisanrakkit and Gillham, J. Appl. Polym. Sci. 42, 2453 (1991).
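The Venditti-Gillham relation above is easy to evaluate numerically. A sketch in Python; Tg∞ = 178°C is taken from the text, while Tg0 and the ΔCp∞/ΔCp0 ratio below are illustrative placeholders, not values from the cited papers:

```python
import math

def tg_venditti_gillham(alpha, tg0_K, tginf_K, dcp_ratio):
    """Tg as a function of conversion alpha (0..1).
    tg0_K, tginf_K: Tg of uncured and fully cured material, in kelvin.
    dcp_ratio: heat capacity step ratio dCp_inf / dCp_0."""
    num = (1.0 - alpha) * math.log(tg0_K) + dcp_ratio * alpha * math.log(tginf_K)
    den = (1.0 - alpha) + dcp_ratio * alpha
    return math.exp(num / den)

# placeholder inputs: Tg0 = -20°C (253 K), Tginf = 178°C (451 K), ratio 0.6
tg = tg_venditti_gillham(0.95, 253.0, 451.0, 0.6)
print(f"Tg at 95% conversion ~ {tg - 273.15:.0f} C")
```

The relation reduces to Tg = Tg0 at α = 0 and Tg = Tg∞ at α = 1, and its steep slope at high conversion is what makes Tg such a sensitive probe of the last few percent of cure.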
To review, thermoanalytical techniques include differential scanning calorimetry (DSC), rheology, dynamic mechanical analysis (DMA), thermal mechanical analysis (TMA) and thermogravimetric analysis (TGA). DSC measures heat flow into a material (endothermic) or out of a material (exothermic); thermoset cure is exothermic. DSC applications include measurement of Tg, conversion α, the reaction rate dα/dt and the heat capacity Cp. Gelation cannot be detected by DSC, but vitrification can be measured by modulated-temperature DSC (MTDSC). Rheology measures the complex viscosity in steady or oscillatory shear. In oscillatory shear the advance of cure can be monitored through the gel point, and both gelation and the onset of vitrification can be detected. DMA, the subject of this paper, measures the complex modulus and compliance in several oscillatory modes. Gelation and vitrification can be detected, and the cure reaction can be monitored via Tg and, beyond the gel point in the absence of vitrification, via the modulus. Tg, secondary transitions below Tg, creep and stress relaxation can also be measured. TMA measures linear dimensional changes with time or temperature, sometimes under high loading. Measurements include the linear coefficient of thermal expansion (CTE), Tg, creep and relaxation of stresses. TGA measures mass change, primarily in terms of weight loss. Measurements include filler content, weight loss due to cure, outgassing, and thermal and thermo-oxidative stability.

Dynamic mechanical analysis (DMA) measures the complex modulus and compliance as a function of temperature, time and frequency. Examples of sample deformation modes include fixed frequency oscillation (single/dual cantilever, 3-point bend, tension, shear sandwich), resonant frequency oscillation, creep and stress relaxation.
Thermoset properties measured include storage and loss modulus, storage and loss compliance, tan δ, Tg, secondary transitions below Tg, gelation and vitrification, and reaction beyond the gel point. In terms of modulus, the properties measured are
• storage modulus (E', G'), which is a measure of the stress stored in the sample as mechanical energy
• loss modulus (E", G"), which is a measure of the stress dissipated as heat
• tan δ (E"/E' = G"/G'), the tangent of the phase lag δ between stress and strain and a typical measure of damping or energy dissipation

Figure 5 illustrates typical DMA of cured thermosets. There are three regions: a glassy region, which is similar for all thermosets and is characterized by very high storage modulus (>1 GPa), low loss modulus and very low tan δ; a glass transition region, where the storage modulus can decrease by a factor of 10 – 100 and the loss modulus and tan δ reach maxima; and a rubbery plateau region, with a stable storage modulus proportional to the cross-link density and low loss modulus and tan δ.
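The three quantities in the list follow directly from one oscillatory measurement: the stress and strain amplitudes give the magnitude of the complex modulus, and the phase lag splits it into E' and E". A minimal sketch with hypothetical numbers:

```python
import math

def dynamic_moduli(stress_amp, strain_amp, delta_rad):
    """Split an oscillatory measurement into storage modulus, loss
    modulus and tan(delta), given the phase lag delta in radians."""
    e_star = stress_amp / strain_amp          # |E*|, complex modulus magnitude
    e_storage = e_star * math.cos(delta_rad)  # in-phase (elastic) part
    e_loss = e_star * math.sin(delta_rad)     # out-of-phase (viscous) part
    return e_storage, e_loss, e_loss / e_storage

# hypothetical glassy-state values: 20 MPa stress at 1% strain, 2 deg lag
Ep, Epp, tan_d = dynamic_moduli(20.0e6, 0.01, math.radians(2.0))
print(f"E' = {Ep:.3g} Pa, E'' = {Epp:.3g} Pa, tan delta = {tan_d:.4f}")
```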
Figure 5. Schematic dynamic mechanical analysis of lightly and highly cross-linked thermosets.

For fixed frequency, DMA may be performed in single cantilever, dual cantilever, three-point bend, tensile or shear-sandwich deformation modes. Two illustrations are shown below. The first, shown in Fig. 6, is the single cantilever mode. Neat samples can be made in Teflon or silicone rubber molds, and are generally 0.5 to 2.5 mm thick. This method can be quantitative if the dimensions are precise, but it requires a length correction for the clamps. For very high modulus samples the three-point bend mode may be preferred, which has the additional benefit of eliminating clamping effects.
Figure 6. Single cantilever mode for dynamic mechanical analysis. Courtesy TA Instruments.

The second method illustrated is the wire mesh method, which is done in the tensile mode [1,6]. The wire mesh is usually stainless steel ~0.1 mm thick, which may be easily coated with a liquid or paste material. The thickness is uniform and may be varied by using different thicknesses of mesh. Because the mesh sample is mounted on a 45° bias, where the mesh itself has almost no shear resistance, this technique gives a very sensitive measure of Tg. Semi-quantitative values for G' and G" may be extracted via composite analysis. Samples supported on metal shim or glass fabric are normally analyzed in the dual cantilever mode.
Figure 7. Wire mesh technique for dynamic mechanical analysis of liquids and pastes [1,6].

Gelation is the first appearance of a cross-linked network. It is the irreversible transformation of a liquid to a gel or rubber, and it is accompanied by a small increase in the storage modulus. A distinction may be drawn between molecular or chemical gelation (the phenomenon) and macroscopic gelation (its consequence). Chemical gelation as defined by Flory is the approach to infinite molecular weight. It is an isoconversional point (αgel) that is observable as the first detection of insoluble, cross-linked gel in a reacting mixture (sol). Chemical gelation is also defined as the point where tan δ becomes frequency independent [7]. Macroscopic gelation may be observed as the approach to infinite viscosity, the first evidence of an equilibrium modulus, the G' = G" crossover in a rheology measurement, or as a loss peak in fiber and mesh supported systems.

As just mentioned, the gel point is often estimated in a rheological measurement as the crossover where G' = G" or tan δ = 1, as illustrated in Fig. 8 for 5 Minute Epoxy. In this material the "5 Minutes" refers to the gel point or maximum work life at room temperature, and the value measured can be seen to be very close to this time. As mentioned above, a better measure is the point where tan δ becomes frequency independent, which is usually close to but not necessarily equal to one. Note that a rubbery plateau modulus between 10^5 and 10^6 Pa is reached after long times, demonstrating that vitrification, which would result in a modulus >1 GPa, did not occur in this sample.
Figure 8. Dynamic rheology of 5 Minute Epoxy showing measurement of gelation from the G' = G" crossover.

Vitrification is glass formation due to Tg increasing from below Tcure to above Tcure as a result of reaction. It only occurs when Tcure < Tg∞ and begins when Tg = Tcure (the definition of vitrification). Vitrification is reversible by heating: liquid or gel ⇔ glass. It causes a dramatic slowing of the rate of cure as a result of a shift in the reaction from chemical control to diffusion control. Vitrification is mechanically observable as a large increase in modulus and frequency dependent loss peak(s). This phenomenon is illustrated in the following two slides. It is also observable by MTDSC as a step decrease in heat capacity (see the earlier paper on "Thermal Analysis in Thermoset Characterization").
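The G' = G" crossover in Fig. 8 can be extracted from sampled data by interpolation. A sketch, using synthetic exponential cure curves rather than the actual 5 Minute Epoxy data:

```python
import numpy as np

def crossover_time(t, g_storage, g_loss):
    """Estimate the gel time as the first G' = G'' crossover, by linear
    interpolation of the log-modulus difference between sample points."""
    d = np.log10(g_storage) - np.log10(g_loss)
    for i in range(len(d) - 1):
        if d[i] < 0 <= d[i + 1]:  # G' rises through G''
            frac = -d[i] / (d[i + 1] - d[i])
            return t[i] + frac * (t[i + 1] - t[i])
    return None  # no crossover found in the record

# synthetic cure: G' grows faster than G'' and overtakes it near t = 5 min
t = np.linspace(0.0, 10.0, 101)
g_loss = 100.0 * np.exp(0.30 * t)
g_storage = 10.0 * np.exp(0.76 * t)
print(f"estimated gel time ~ {crossover_time(t, g_storage, g_loss):.2f} min")
```

With multi-frequency data the same scan can be repeated per frequency; the crossover is frequency dependent, whereas the frequency-independent tan δ point is the better gel criterion, as noted above.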
Figure 9. Cure through vitrification at 21°C of epoxy coated on one side of 50 μm steel foil. From R. E. Wetton, in Developments in Polymer Characterization – 5 (J. V. Dawkins, ed.), pp. 179-221, Elsevier, London (1986).

Figure 9 shows the cure through vitrification at 21°C of 1 mm thick epoxy on 50 μm steel foil, coated on one side [8]. Measurements were made by DMTA at 1 Hz in the dual cantilever mode. E' and tan δ were calculated from a composite analysis. The time to vitrify can be taken from the peak in tan δ at ~20 minutes. The rise in E' to >1 GPa confirms that the sample has vitrified.

Figure 10 shows multi-frequency DMA during isothermal cure at 40°C of an epoxy in the shear sandwich mode [9]. Vitrification is observed as a series of frequency-dependent tan δ peaks and a frequency-dependent rise in the storage modulus to ~1 GPa. Note that vitrification occurs at shorter times with increasing frequency, i.e. it is not isoconversional. Also note the frequency independence of tan δ initially, which suggests that gelation may have occurred very early in the reaction.
Figure 10. Multi-frequency DMA (0.1, 1, 10 and 100 Hz) during isothermal cure of an epoxy, showing log G' and tan δ over 2 hours. From Wetton et al., Proc. 16th NATAS Conf., 64 (1987).
Figure 11. Torsional braid analysis during isothermal cure of a cycloaliphatic epoxy / HHPA (anhydride) system; peak A is gelation, peak B is vitrification. From Gillham et al., Polym. Prepr., Am. Chem. Soc., Div. Polym. Chem. 15(1), 241 (1974).
Figure 11 is a classic example of gelation and vitrification during isothermal cure of an epoxy by torsional braid analysis (TBA) [10]. At 50°C the sample vitrifies, which quenches the reaction prior to gelation, so that only vitrification is observed. Between 80 and 150°C gelation is observed, followed by vitrification. At 175°C and above only gelation is observed, because the cure temperature is above Tg∞ and the system cannot vitrify. These measurements form the basis for the construction of time-temperature-transformation (TTT) diagrams.
Figure 12. Multi-frequency DMA at 2°C/min of a partially cured B-staged amine-epoxy adhesive [11]. Partially cured, fiberglass reinforced sample with ends wrapped in Al foil; Seiko DMS-110 FT, dual cantilever mode. From Sichina and Matsumori, Proc. 20th NATAS, 41 (1991).

Figure 12 shows the glass transition as a series of frequency-dependent tan δ peaks and a ~100x decrease in storage modulus, followed by gelation near 140°C as a series of frequency-independent tan δ peaks accompanying a small increase in modulus.

In the next section we treat kinetics by means of DMA measurements. The objective is to analyze phenomena associated with the processing and use of thermosets, including process life (shelf life, work life), cure, and functional life or aging. Three methodologies may be used in the study of kinetics by DMA:
• Utilize Tg as a measure of conversion
• Utilize the iso-conversional nature of gel times
• Utilize the relationship between elastic modulus and cross-link density

In Figure 13 we review from the earlier paper the use of Tg as a measure of conversion in the construction of a master cure curve at a reference temperature of 140°C, showing chemical conversion as a solid line and vitrification at cure temperatures below Tg∞ followed by diffusion controlled cure, demarcated by arrows.
Figure 13. Master cure curve at Tref = 140°C for the epoxy-amine cure of DGEBA / PACM-20 (1:1), Tg∞ = 178°C, E = 15.2 kcal/mol. From Wisanrakkit and Gillham, J. Appl. Polym. Sci. 42, 2453 (1991).
Figure 14a. Isothermal cure at 60°C of a thermoset coating on wire mesh by DMTA, showing gelation (gel) and vitrification (vit).

Figure 14b. Arrhenius plot of gel time (s) versus 1/T (K); the fit t_gel = 1E-05 exp(6230.6/T) corresponds to an activation energy of 12.4 kcal/mol.
Shown in Fig. 14a is the isothermal cure at 60°C of a thermoset coating on wire mesh by DMTA [14]. Both gelation and vitrification are clearly detected. Shown in Fig. 14b is the Arrhenius plot of gel times versus cure temperature, yielding an activation energy of 12.4 kcal/mol. Figure 15 shows master cure curves of Tg versus time at 25°C from time-temperature superposition of Tg – time data, using the activation energy from gel times. Results are shown on two time scales, one covering several days and the other several years. While this coating would require a very long time to reach full cure at room temperature, it is also to be expected that vitrification may significantly slow the reaction before complete cure (Tg = 107°C). Note, however, that this coating reached a Tg of over 80°C at 25°C without apparent slowing of the cure process, suggesting that the diffusion controlled cure is not significantly slower than the cure under chemical control. While uncommon, this has been observed in other systems [1].
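The Arrhenius treatment of gel times is a one-line fit. A sketch in Python; the gel-time "data" here are regenerated from the fitted expression quoted in Fig. 14b rather than taken from the measured points:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def activation_energy_from_gel_times(T_K, t_gel_s):
    """Slope of ln(t_gel) versus 1/T gives Ea/R, because gel times
    lengthen as temperature falls: t_gel = A * exp(Ea / (R T))."""
    slope, _ = np.polyfit(1.0 / np.asarray(T_K), np.log(t_gel_s), 1)
    return slope * R  # J/mol

# regenerate gel times from the quoted fit: t_gel = 1e-5 * exp(6230.6/T) s
T = np.array([333.0, 353.0, 373.0])  # 60, 80, 100 degrees C
t_gel = 1.0e-5 * np.exp(6230.6 / T)
Ea = activation_energy_from_gel_times(T, t_gel)
print(f"Ea ~ {Ea / 4184.0:.1f} kcal/mol")  # recovers the quoted 12.4 kcal/mol
```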
Figure 15a. Master curves of Tg versus time at 25°C (0 – 200 hours) constructed from Tg – time data measured at 24, 60, 90 and 120°C.

Figure 15b. The same master curves of Tg versus time at 25°C on an extended time scale (0 – 60,000 hours).
Next we utilize the relation between cross-link density and elastic modulus to measure the DMA degree of cure (DOC). DOC is computed from the storage modulus (G' or E') beyond gelation in the absence of vitrification [13-15] as

DMA DOC = (G'i − G'0) / (G'∞ − G'0)

Shown in Fig. 16 are the results of DMA DOC analyses to characterize the cure of a powder coating on wire mesh [16]. DOC – time curves were superimposed both to measure the activation energy and to create the master cure curve on the right. A good correlation was observed between DOC and impact resistance, and it was found that DOC was a fair predictor of other standard performance tests. DOC master curves were a much better predictor than those based on single-heating-rate DSC measurements, which were found to overestimate the actual cure.
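The DOC expression is a one-line computation once the plateau moduli are known; a sketch with hypothetical rubbery-plateau values:

```python
def dma_doc(g_i, g_0, g_inf):
    """DMA degree of cure from storage moduli beyond gelation:
    DOC = (G'_i - G'_0) / (G'_inf - G'_0)."""
    return (g_i - g_0) / (g_inf - g_0)

# hypothetical plateau moduli (Pa): current, initial gelled, fully cured
print(f"DOC = {dma_doc(4.0e6, 1.0e6, 7.0e6):.2f}")  # halfway between G'_0 and G'_inf
```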
Figure 16. DMA degree of cure analyses for a powder coating on wire mesh: (a) DOC – time curves; (b) master cure curve. From Neag and Prime, J. Coat. Technol. 63(797), 37 (1991).

The next topic is kinetic viscoelasticity, a term coined by Professor James Seferis to describe the dynamic mechanical properties of reacting materials [17,18]. In kinetic viscoelasticity, appropriate parameters of the viscoelastic model become functions of cure kinetics. This methodology was found to give a quantitative description of dynamic mechanical behavior during isothermal cure and a qualitative interpretation of DMA during isothermal and dynamic scans of uncured and partially cured thermosets. The next slide addresses this qualitative interpretation of DMA results, which are commonly encountered.
Figure 17.

In Fig. 17 the solid line is the experimental temperature and the dashed line is Tg, which increases with cure. For the SLOW HEATING RATE (relative to reaction rate), following devitrification where the curves first cross, Tg increases fast enough to cross the experimental temperature curve, whereupon the sample vitrifies. Because temperature is increasing, reaction continues in the glassy state and Tg continues to rise until the reaction is complete, where the sample devitrifies on further heating. Note that there is a loss peak each time the curves cross, from either devitrification or vitrification. For the INTERMEDIATE HEATING RATE the curves just touch, where vitrification and devitrification occur simultaneously. For the FAST HEATING RATE the temperature remains above Tg and reaction is completed without vitrification.

Shown in Fig. 18 is DMA of a partially cured, gelled epoxy [19]. The heating rate is 1°C/min, which is slow relative to the reactivity of the thermoset. Three loss peaks were observed:
1. Devitrification of the partially cured thermoset
2. Revitrification due to reaction
3. Devitrification of the fully cured thermoset
Two loss peaks were observed in the DMA at 5°C/min, devitrification of the partially cured thermoset and revitrification near the end of reaction followed immediately by devitrification at the end of reaction. Only devitrification of the partially cured thermoset was observed at 10°C/min.

Figure 18. DMA of a partially cured, gelled epoxy at 1°C/min [19]: a) storage modulus, b) loss modulus. 25 x 12 x 2 mm sample, TGDDM / DDS (excess epoxy). From Dillman, PhD Dissertation, Dept. Chem. Eng., University of Washington (1988).

Figure 19 shows DMA of an uncured and partially cured epoxidized novolak-anhydride on wire mesh [20]. All samples were heated at 5°C/min. The uncured sample shows a small step increase in G' and a peak in G'', indicating gelation. Focusing on the G'' curves, samples cured at 40 and 60°C show two peaks: devitrification of the partially cured thermoset and a doublet where revitrification is followed almost immediately by devitrification. The 80°C sample shows a merging of all peaks as heating and reaction rates are comparable. Only one very broad peak is observed in the 100°C sample, and the others show only devitrification. Notice the uniform increase in Tg with cure.

Figure 19. G' and G'' curves for uncured and partially cured epoxidized novolak-anhydride samples on wire mesh [20].
The last topic covered is the DMA of cured thermosets. Properties that can be measured include:
• The glass transition Tg
• Modulus – temperature – frequency behavior
• Physical aging (1)
• Time – temperature superposition
• Creep and stress relaxation (2)

The most common application of DMA to thermosets is measurement of the glass transition temperature. Tg is the transition of a glassy solid to a liquid or rubber in an amorphous material. It is accompanied by a 10 – 1000x decrease in storage modulus or increase in storage compliance. Tg is measured as the maximum in loss modulus, loss compliance or tan δ. It is a frequency dependent transition, where ΔTg is typically 5 – 6°C per decade of frequency. The glass transition is accompanied by increases in the coefficient of thermal expansion (~3x), diffusion coefficient, and water sorption and transport. It is a mechanism for the relaxation of residual or stored stress, where heating to just above Tg will allow those stresses to relax.

Figure 20 shows frequency multiplexing during a 1°C/min DMA scan of a moderately cross-linked epoxy [23]. Frequency was stepped between five values ranging from 0.33 to 30 Hz as the sample was heated. Both storage modulus and tan δ can be seen to shift to higher values with increasing frequency. The typical dependence of Tg on frequency can be seen in the tan δ data. An Arrhenius plot of log(frequency) versus reciprocal absolute temperature yielded an activation energy of 383 kJ/mol for the Tg relaxation process.
Figure 20. Log E' and tan δ during frequency multiplexing in a 1°C/min DMA scan of a moderately cross-linked epoxy. From Gearing and Stone, Polym. Compos. 5, 312 (1984).
(1) Physical aging consists of a densification process inherent in the nonequilibrium character of the glassy state. It takes place below Tg and is observed by DMA as an increase in modulus and a decrease in damping [22].
(2) Creep compliance = strain(t) / stress; stress relaxation modulus = stress(t) / strain.
Time-temperature superposition is a concept that was introduced in the 1950’s by Williams, Landel and Ferry [24]. It addresses the temperature dependence of relaxation processes in amorphous polymers above Tg which involves the temperature dependence of free volume. Master curves of, e.g., modulus-frequency, creep compliance and stress relaxation modulus, may be constructed over several decades of time or frequency. Master curves allow the concise presentation of all data on one curve (the “big picture”) and permit behavior to be predicted at relevant conditions, e.g. over the lifetime of a structural part. The WLF equation is shown at the bottom of Fig. 21. aT is the shift factor.
Figure 21. Construction of a master curve: relaxation data measured at temperatures T1 > T2 > T3 are shifted along the log time axis, by shift factors aT, onto a reduced time scale (t/aT). The WLF equation gives the shift factor:

log aT = log(t2/t1) = −C1 (T − T0) / (C2 + T − T0)
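The shift factor is simple to compute from the WLF form above. A sketch using the commonly quoted "universal" constants C1 = 17.44 and C2 = 51.6, which apply when the reference temperature is Tg; these constants are an assumption, not values given in the text:

```python
def wlf_log_shift(T, T0, c1=17.44, c2=51.6):
    """log10 of the WLF shift factor aT for temperature T relative to
    the reference T0 (both on the same scale, e.g. degrees C)."""
    return -c1 * (T - T0) / (c2 + (T - T0))

# a curve measured 10 degrees above Tg shifts onto the Tg reference curve
log_aT = wlf_log_shift(180.0, 170.0)
print(f"log aT = {log_aT:.2f}, i.e. relaxation is ~{10.0 ** -log_aT:.0f}x faster")
```

Applying the negative of this shift to log(frequency) data, as done for Figs. 23 and 24 below, collapses curves measured at different temperatures onto one master curve.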
Figure 22. Temperature – frequency behavior of a modified toughened epoxy [25]. 15 x 10 x 2 mm sample; TAI 983 DMA; frequency scans at 2.5°C intervals; frequencies 0.01, 0.05, 0.1, 0.5, 1 Hz; Tg = 170°C (E" max, 1 Hz).

Figure 22 shows the temperature – frequency behavior of a modified toughened epoxy [25]. The temperature was raised in a step-isothermal mode with 2.5°C intervals. During the isothermal portions frequency was scanned from 0.01 to 1 Hz. The Tg for this system was 170°C, taken as the maximum in the loss modulus at 1 Hz.
Figure 23. Storage modulus – frequency curves through the glass transition interval [25]. 15 x 10 x 2 mm sample; TAI 983 DMA; frequency scans at 2.5°C intervals; frequencies 0.01, 0.05, 0.1, 0.5, 1 Hz.

Figure 23 shows the storage modulus – frequency curves through the glass transition interval [25]. Note that this is the same data shown above but plotted in a different fashion. To construct the master curves, the reference curves were chosen as the storage modulus curve at 170°C (highlighted above) and the loss modulus curve at 170°C (not shown). Data >170°C were shifted to the left and data <170°C were shifted to the right to form the master curves of storage and loss modulus versus frequency at a reference temperature of 170°C in Fig. 24. Note that decreasing frequency is equivalent to increasing temperature. When coupled with a quantitative description of the shift factors (aT) versus temperature, e.g. by means of the WLF equation, such master curves completely define the viscoelastic properties of a material.
Figure 24. Master curves of storage and loss modulus versus frequency at a reference temperature of 170°C = Tg (E" max, 1 Hz); decreasing frequency corresponds to increasing temperature [25].
Bibliography
1. R. B. Prime, Chapter 6, "Thermosets," in Thermal Characterization of Polymeric Materials (E. A. Turi, ed.), Academic Press, San Diego (1997).
2. H. H. Winter et al., in Techniques in Rheological Measurement (A. A. Collyer, ed.), Chapman and Hall, London (1997).
3. G. Wisanrakkit and J. K. Gillham, J. Appl. Polym. Sci. 42, 2453 (1991).
4. A. T. DiBenedetto, J. Polym. Sci., Part B: Polym. Phys. 25, 1949 (1987).
5. R. A. Venditti and J. K. Gillham, J. Appl. Polym. Sci. 64, 3 (1997).
6. S. H. Dillman, J. C. Seferis and R. B. Prime, Proc. North Am. Therm. Anal. Soc. Conf. 16th, 429 (1987).
7. H. H. Winter, Polym. Eng. Sci. 27, 1698 (1987).
8. R. E. Wetton, in Developments in Polymer Characterization – 5 (J. V. Dawkins, ed.), pp. 179-221, Elsevier, London (1986).
9. R. E. Wetton, P. W. Ruff, J. C. Richmond, and J. T. O'Neill, Proc. North Am. Therm. Anal. Soc. Conf. 16th, 64 (1987).
10. J. K. Gillham, J. A. Benci, and A. Noshay, Polym. Prepr., Am. Chem. Soc., Div. Polym. Chem. 15(1), 241 (1974).
11. W. J. Sichina and B. Matsumori, Proc. North Am. Therm. Anal. Soc. Conf. 20th, 41 (1991).
12. A. T. Eng, L. M. McGrath, F. D. Pilgrim, and R. B. Prime, North Am. Therm. Anal. Soc. Conf. 30th, Poster Paper (2002).
13. P. G. Babayevsky and J. K. Gillham, J. Appl. Polym. Sci. 17, 2067 (1973).
14. T. Provder, R. M. Holsworth and T. H. Grentzer, ACS Symp. Ser. 203, 77 (1983).
15. M. E. Koehler, A. F. Kah, C. M. Neag, T. F. Niemann, F. B. Mahili and T. Provder, Anal. Calorim. 6, 361 (1984).
16. C. M. Neag and R. B. Prime, J. Coat. Technol. 63(797), 37 (1991).
17. S. H. Dillman and J. C. Seferis, J. Macromol. Sci. Chem. A26, 227 (1989).
18. J.-D. Nam and J. C. Seferis, J. Polym. Sci.: Polym. Phys. Ed. 37(9), 907 (1999).
19. S. H. Dillman, Ph.D. Dissertation, Dept. Chem. Eng., University of Washington (1988).
20. N. M. Patel, C. H. Moy, J. E. McGrath, and R. B. Prime, Proc. North Am. Therm. Anal. Soc. Conf. 18th, 232 (1989).
21. J. C. Seferis, personal communication.
22. L. C. E. Struik, Physical Aging in Amorphous Polymers and Other Materials, Elsevier, New York (1978).
23. J. W. E. Gearing and M. R. Stone, Polym. Compos. 5, 312 (1984).
24. M. L. Williams, R. F. Landel and J. D. Ferry, J. Am. Chem. Soc. 77, 3701 (1955).
25. E. M. Woo, J. C. Seferis and R. S. Schaffnit, Polym. Compos. 12, 273 (1991).
Fundamentals and Applications of DEA

C.A. Gracia-Fernández, S. Gómez-Barreiro, Lisardo Núñez-Regueira

Research Group TERBIPROMAT, Departamento Física Aplicada, Universidade de Santiago de Compostela. Av. J. M. Suárez Núñez, 15782 Santiago, Spain
[email protected]

1. Introduction

Dielectric techniques have been used to follow chemical reactions for many decades. In 1934, Kienle and Race reported a study of polyesterification reactions using dielectric measurements. In this forward-looking paper, many of the issues important in present-day studies were already identified. Among them were the correlation between conductivity and viscosity, the fact that conductivity does not present sudden changes at gelation, and the fact that ionic conductivity generally governs the observed dielectric properties. A constant in all branches of science is that study starts with the simplest system, which requires only a few parameters. As the complexity of the system increases, more parameters are needed and a series of approximations, which reduce the accuracy of the equations, must be introduced. This is the case for both dielectric analysis and polymer studies. The application of the dielectric analysis technique to simple molecules is certainly easy. However, polymers are systems of such complexity that there is no single model that explains their behaviour in the two most relevant variables: temperature and frequency. In fact, the glass transition, one of the most decisive characteristics for the properties of a polymer, is not completely understood from the theoretical point of view. This is the reason why the mathematical expressions that relate the different variables involved in the system are often empirical. Some of these equations are those developed by Arrhenius, Vogel, Williams-Landel-Ferry, Cole-Cole, Davidson-Cole and Havriliak-Negami, each of them valid for different conditions and states of aggregation of different classes of polymers. From 1958 onward, many articles have been reported, mainly on epoxy materials.
Some of the problems that have hampered this field are the excessively empirical nature of the research, the use of deficient models, and the fact that dielectric measurements have only rarely been correlated with measurements of other properties of interest.

2. Dielectric Analysis (DEA)

Dielectric analysis measures changes in the properties of a material as it is subjected to a (commonly cyclic) electric field. It can supply information on dielectric properties: thermal transitions, molecular relaxations, rate of cure, degree of cure, etc. DEA can follow the complete transformation of a thermoset from a low-molecular-weight compound to a solid crosslinked system of infinite molecular weight. The use of disposable microdielectric sensors allows dielectric measurements to be made in the laboratory as well as in reactive manufacturing processes. The information on dielectric properties can be obtained almost instantaneously without significant
disturbance of the process. Electrical properties of polymers, such as permittivity, loss factor and conductivity, can be obtained directly by dielectric analysis. To perform dielectric measurements, the material is placed between two electrodes and a periodic voltage is applied between them. This voltage creates an electric field in the sample. In response to this electric field, displacement of charged units takes place in the material, giving rise to polarization and ionic conduction; that is, to a current whose amplitude depends on the frequency of the measurement and also on the temperature and structural properties of the material under study.

3. Basic principles

As previously mentioned, dielectric analysis is concerned with the measurement and characterisation of the response of a material to an applied periodic electric field. For this study, the material is placed between two electrodes, a sinusoidal voltage is applied across the electrodes and the current response is measured. From a molecular point of view, the current consists of a displacement of electrically charged units in the sample material. The charged units can be dipoles or mobile units (free electrons, ions). In the case of dipoles, they will align in the direction of the applied field. If the charged units are mobile, the electric field will originate conduction of net charge from one electrode to the other. In the case of polymers, both dipoles and mobile charges are present. The applied electric field gives rise to polarization and ionic conduction. The sum of polarization plus conduction is the current to be measured. This current is at the same frequency as the voltage, but is shifted in phase and amplitude. Both the phase angle and the relative amplitude change are related to the properties of the sample between the electrodes.
To interpret the amplitude and phase difference in terms of the dielectric properties of the material, it is necessary to have data on the electrodes. The electrode assembly plays a twofold role: on the one hand it transmits the applied voltage to the sample; on the other hand, it collects the response current. The applied electric field E is related to the current density J through the relative complex dielectric constant of the sample, ε*:

J = iωε0ε*E        (1)
where i is the imaginary unit (√−1), ω is the angular frequency (rad s⁻¹) and ε0 is the permittivity of free space (8.85×10⁻¹² F m⁻¹). It is assumed that the medium is homogeneous and that its behaviour is linear with respect to the electric field. The complex dielectric constant is defined as:
ε* = ε´ − iε´´        (2)
where ε´ is the relative bulk permittivity and ε´´ is the relative bulk loss factor; both are frequency dependent and depend also on the temperature and the structure of the material sample. The ratio ε´´/ε´ is known as the loss tangent or dissipation factor, tan δ. Both the permittivity and the loss factor are the characteristic dielectric properties of a material.
The relative permittivity, ε´, is related to the capacitive or energy-storing ability of a material, and measures the electrical polarization of the sample per unit applied electric field. It is composed of the unrelaxed permittivity εu, which some investigators express as ε∞, the baseline permittivity, and an additional term εd´ associated with dipole alignment:

ε´ = ε∞ + εd´        (3)

The unrelaxed permittivity originates from electronic and atomic polarizations and, at low frequencies, is frequency independent. Typical dielectric studies are carried out at frequencies below 1 MHz. At frequencies of this order, and temperatures below the glass transition, there can be dipolar contributions to ε´ from restricted motions of polar groups. As the temperature decreases, or the frequency increases, the polar groups lose the ability to orient with the applied electric field and ε´ decreases significantly, giving rise to the lower transitions (β, γ, ...). The relative loss factor, ε´´, measures the energy necessary for molecular motion in an electric field. It originates from two sources: the energy losses owing to molecular dipole orientation, εd´´, and the energy losses due to the conduction of ionic species, εc´´:

ε´´ = εd´´ + εc´´        (4)

At temperatures well below Tg, polymers generally have loss factors less than 0.1. However, when the temperature reaches the Tg value and above, the loss factor can be as high as 10⁹.

4. Types of experiments

Thermal analysis experiments have in common that the temperature is always under the complete control of the investigator. In this sense, the different experiments can be divided into two main groups: isothermal experiments, in which the temperature remains constant throughout the experiment, and dynamic experiments, in which the temperature is changed (increased or decreased) at a fixed constant rate. Sometimes, a combination of both methods is advisable.
The choice of a particular method depends on the type of study to be done. In particular, isothermal experiments are carried out when the objective is to analyse the behaviour of the dielectric properties as a function of time or frequency. Another point to consider is the type of process experienced by the sample; for example, whether the sample is undergoing polymerization while the experiment takes place or, on the contrary, is already polymerized. In dynamic experiments the sample is subjected to a controlled constant heating (or cooling) rate. One of the most direct applications of this method is the study of the possible transitions of a polymerized material and, in particular, the determination of the glass transition temperature.

4.1. Frequency domain
Another variable that can be controlled in a dielectric analysis experiment is the form of the applied electric field, which can be either sinusoidal or step-like. Figure 1 shows the electric current in a dielectric polymer as a function of time when the sample is subjected to a constant electric field[1].
Figure 1. Typical change of a transient current in a dielectric material.

The main interest of the experiment in which the electric field is applied step-like lies in determining φ(t), the response function of the dielectric, which is closely related to the molecular relaxation of the polymer. Most of the time, however, the applied field is a sinusoidal one whose frequency can be experimentally controlled. In this case, the dielectric constant ε* is a frequency-dependent complex function that also depends on some other factors. The relationship between φ(t) and ε* takes the form:

ε*(ω) = εu + (εr − εu) ∫₀^∞ (−dφ/dt) e^(−iωt) dt        (5)

where εu is the previously mentioned unrelaxed permittivity (the permittivity as ω tends to infinity) and εr is the relaxed permittivity (ω tends to zero). The difference (εr − εu) is known as the dipole strength. It can be observed that the change from φ(t) to ε*(ω) is a change of domain from time to frequency. The search for φ(t) thus becomes one of the main objectives of polymer molecular studies. However, finding the form of the relaxation function is not a simple task. Debye[2] proposed the relationship:

φ(t) = e^(−t/τ)        (6)

where τ is the characteristic relaxation time of the system. Substitution of Eq. (6) into Eq. (5) gives:

ε*(ω) = εu + (εr − εu)/(1 + iωτ)        (7)

The complex dielectric constant of the material can be divided into its real and imaginary parts:

ε´ = εu + (εr − εu)/(1 + (ωτ)²)        (8)

ε´´ = (εr − εu)ωτ/(1 + (ωτ)²)        (9)
The Argand diagram is a very helpful tool for examining the behaviour of both the real and imaginary parts of the complex dielectric constant. In it, ε´´ is plotted versus ε´ at different frequencies. For a system following Debye's model, the Argand plot (Fig. 2) has the form of a semicircle of radius (εr − εu)/2 centred at the point [(εr + εu)/2, 0].
Figure 2. Argand plot for a polymer following Debye's model.

In general, polymers do not follow Debye's model. Cole and Cole[3] proposed an equation that includes a parameter a related to the width of the relaxation times distribution function:

ε*(ω) = εu + (εr − εu)/(1 + (iωτ)^a)        (10)

With respect to the Argand diagram, the deviation of this equation from Debye's model is reflected in an angular shift of Debye's semicircle, as shown in Fig. 3.
Figure 3. Argand diagram for a polymer following the Cole-Cole equation.
Davidson and Cole[4] modified Debye's expression by introducing a parameter b that accounts for the asymmetry of the relaxation times distribution function:

ε*(ω) = εu + (εr − εu)/(1 + iωτ)^b        (11)

See Fig. 4.
Figure 4. Argand diagram for a polymer following the Davidson-Cole relaxation model.

These last two equations were unified by Havriliak and Negami[5,6] in the following expression:

ε*(ω) = εu + (εr − εu)/(1 + (iωτ)^a)^b        (12)

which successfully fits an enormous amount of experimental results. Figure 5 shows an Argand diagram for a polymer following the Havriliak-Negami equation.
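Since the Havriliak-Negami expression contains the previous models as special cases (a = b = 1 gives Debye, b = 1 gives Cole-Cole, a = 1 gives Davidson-Cole), this reduction is easy to verify numerically. A minimal sketch with illustrative parameter values (not from the chapter):

```python
import numpy as np

def havriliak_negami(omega, eps_u, eps_r, tau, a, b):
    """Eq. (12): eps*(w) = eps_u + (eps_r - eps_u)/(1 + (i*w*tau)**a)**b."""
    return eps_u + (eps_r - eps_u) / (1 + (1j * omega * tau) ** a) ** b

omega = np.logspace(0, 8, 401)
eps_u, eps_r, tau = 3.0, 8.0, 1e-4   # illustrative values

hn_debye    = havriliak_negami(omega, eps_u, eps_r, tau, 1.0, 1.0)  # a = b = 1 -> Debye, Eq. (7)
hn_colecole = havriliak_negami(omega, eps_u, eps_r, tau, 0.8, 1.0)  # b = 1 -> Cole-Cole, Eq. (10)
hn_davcole  = havriliak_negami(omega, eps_u, eps_r, tau, 1.0, 0.6)  # a = 1 -> Davidson-Cole, Eq. (11)

# Direct Debye evaluation for comparison
debye_direct = eps_u + (eps_r - eps_u) / (1 + 1j * omega * tau)
print(np.allclose(hn_debye, debye_direct))   # True
```

Broadening the relaxation-time distribution (a < 1) lowers and widens the loss peak relative to the Debye case, which is the depressed semicircle seen in the Argand plots.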
Figure 5. Argand diagram for a polymer following the Havriliak-Negami equation.

Up to now, only dipolar contributions have been considered. However, conductivity is a phenomenon that can overlap with dipole relaxation in measurements of ε´ and ε´´. When an electric field is applied to a dielectric material, the free charges in it originate an energy loss that contributes to the ε´´ value:

ε´´ = ε´´dip + σ/(ε0ω)        (13)

where σ is the bulk ionic conductivity. The contribution of the conductivity[7] is reflected as a substantial increase in ε´´, as shown in Fig. 6 and in Fig. 7.
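The relative weight of the two terms in Eq. (13) can be illustrated numerically: at low frequency the conduction term σ/(ε0ω) dominates and produces the characteristic rise of ε´´ with slope −1 on a log-log plot. A sketch with assumed, purely illustrative values:

```python
import numpy as np

EPS0 = 8.85e-12  # permittivity of free space, F/m

def loss_with_conduction(omega, eps_u, eps_r, tau, sigma):
    """Eq. (13): total loss = dipolar (Debye, Eq. (9)) loss + sigma/(eps0*omega)."""
    eps_dip = ((eps_r - eps_u) * omega * tau) / (1 + (omega * tau) ** 2)
    return eps_dip + sigma / (EPS0 * omega)

# Illustrative values (not from the chapter)
omega = np.logspace(0, 8, 401)
eps2 = loss_with_conduction(omega, eps_u=3.0, eps_r=8.0, tau=1e-4, sigma=1e-9)

# At low frequency the sigma/(eps0*omega) term dominates: log-log slope -> -1
slope = np.log(eps2[1] / eps2[0]) / np.log(omega[1] / omega[0])
print(slope)  # close to -1
```

This −1 slope is what allows the conductive contribution to be identified and subtracted before analysing the dipolar relaxation.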
Figure 6. Argand diagram for conductive component materials. a > b > c >d.
Figure 7. Plot of ε´´ versus ln ω for a dielectric material.

It is important to point out that the ionic conductivity also affects ε´, as free charges can accumulate on the electrodes, thus changing the value of the permittivity as measured by the experimental equipment. Kohlrausch[8], and Williams and Watts[9], gave an expression for φ(t):

φ(t) = e^(−(t/τ)^β)        (14)

with 0 ≤ β ≤ 1. The use of this equation, as happens with the Davidson-Cole and Havriliak-Negami equations, gives an asymmetric Argand diagram closely related to the asymmetry of the relaxation times distribution function. In general, the data obtained must be analysed through an Argand plot, at a fixed temperature, observing which equation best fits the experimental results.

4.2. Temperature domain
Temperature has a very strong influence on the dielectric behaviour of a material. Because of this, it is very important to know the relationship between frequency and temperature. For a dynamic experiment at a given frequency, in which the sample is heated at a constant rate, the transition temperature is taken either as that corresponding to the maximum of the ε´´ vs T curve or as that corresponding to the maximum of the tan δ vs T plot. In the particular case of the glass transition, the transition temperature is designated Tg. A simple relationship between f and T is the Arrhenius-like equation:

f = f0 e^(−Ea/RT)        (15)

where Ea is the activation energy, R the gas constant and f0 a pre-exponential factor. This equation correctly describes the behaviour of polymer transitions at temperatures below Tg. However, the temperature dependence of the α relaxation in amorphous polymers is usually found to follow the Vogel-Fulcher[10,11] equation:

f = A e^(−B/(T−T0))        (16)

where A, B and T0 are constants for a given material. Experimentally, it is found that T0 values lie between 30 and 70 ºC below the Tg value as measured by DSC. Both the Arrhenius and Vogel-Fulcher equations can be used either as functions of the frequency or of its reciprocal, the relaxation time. In that case, Eqs. (15) and (16) can be written as:

τ = τ0 e^(Ea/RT)        (15 bis)

τ = A e^(B/(T−T0))        (16 bis)

Williams, Landel and Ferry[12] related the variable studied to a reference value, obtaining from Eq. (16 bis) the following expression, known as the WLF equation:

Ln(τ/τs) = −C1(T − Ts)/(C2 + T − Ts)        (17)

where C1 = B/(Ts − T0) and C2 = Ts − T0. Sheppard and Senturia[13] used this equation to study epoxy systems. Values reported for C1 are in the range between 14 and 18, while C2 values fall in the range 30-70. One of the most important applications of the WLF equation is the construction of master curves[14] based on the time-temperature superposition principle. Experimental frequency and temperature data are usually represented in the form of an ln f vs 1000/T(K) Arrhenius plot. For sub-Tg transitions, Ea can be obtained from the slope of the straight-line plot. For amorphous polymers, however, the plot curves strongly as T approaches Tg, following the WLF equation.

5. Practical applications to a cured thermoset
As a practical application[15,16], a study of the cured system DGEBA/m-xylylenediamine is presented. The system has been subjected to two types of temperature programs. One of them consisted of ramps of 2 ºC, keeping the temperature constant for a time that allowed a frequency scan in the range between 10⁻¹ and 10⁶ Hz. This ensures that the measurements of ε´ and ε´´ were carried out at the same constant temperature. These experiments are known as isothermal experiments. In the second type of experiment, known as dynamic, the temperature is increased at a constant rate of 2 ºC/min. Figs. 8 and 9 show ε´ and ε´´, respectively, as functions of temperature in dynamic experiments at 2 ºC/min, carried out in the range between −150 and 280 ºC.
Figure 8. ε´ as a function of temperature for the system DGEBA (n = 0)/ m-XDA.
Figure 9. ε´´ as a function of temperature for the system DGEBA (n = 0)/ m-XDA.
Fig. 8 shows the typical behaviour of an amorphous crosslinked network, in which a gradual increase in temperature gives rise to different transitions, thus causing an increase in ε´. This increase is caused by the growing mobility that temperature imparts to the dipoles, allowing them to align with the applied field. It must be pointed out that the dipole orientation capacity depends on the field frequency.
As the frequency increases, increasingly high temperatures are needed for dipole orientation. The different transitions are observed as peaks in the ε´´ - T curve. The transition at the higher temperature is known as the α-transition and is related to the glass transition of the material. This is the reason why Tg is taken as the temperature of the maximum in the ε´´ - T curve. The transition at the lower temperature is named the β-transition and is associated with the motion of the side chains. In Fig. 9 an increase in the mobility of the ions in the sample can also be observed. Fig. 10 is an Arrhenius plot for the α and β transitions of the epoxy system studied. It shows a linear behaviour for the β-transition, while the α-transition presents a curvature with changes in slope. This relaxation follows Vogel-equation behaviour. At sufficiently high temperatures, both transitions can overlap, giving rise to the αβ transition. In our case, this behaviour does not take place because the system cannot reach high temperatures without degradation processes.
Figure 10. Arrhenius plot of the α and β transitions of the thermoset BADGE(n=0)/mXDA.
The apparent activation energy corresponding to the β-transition is, in this case, 67 kJ/mol. The WLF equation was used to calculate the C1 and C2 values characterizing the α-transition, resulting in C1 = 16.2 and C2 = 62.0.
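With these constants, the WLF equation predicts how much the relaxation time shifts around the reference temperature. A minimal sketch; only C1 and C2 come from the text, while the reference temperature Ts used here is an illustrative choice:

```python
def wlf_log_shift(T, Ts, C1, C2):
    """WLF equation, Eq. (17): ln(tau/tau_s) = -C1*(T - Ts)/(C2 + T - Ts)."""
    return -C1 * (T - Ts) / (C2 + T - Ts)

# C1 and C2 determined in the text for the alpha-transition of BADGE(n=0)/m-XDA
C1, C2 = 16.2, 62.0
Ts = 150.0  # reference temperature in degrees C; illustrative, not given explicitly above

for T in (140.0, 150.0, 160.0):
    print(T, wlf_log_shift(T, Ts, C1, C2))
# At T = Ts the shift is zero; tau shortens above Ts and grows below it
```

These shift factors are what allow isothermal curves measured at different temperatures to be superposed onto a single master curve.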
For the analysis of the isothermal experiments on the cured system, plots of ε´ and ε´´ vs. ln f were constructed (Figs. 11 and 12) at temperatures close to the α-transition determined through the dynamic experiments.
Figure 11. Plot of ε´ vs. Ln f at various temperatures.
Figure 12. Plot of ε´´ vs. Ln f at various temperatures.
In Fig. 11 it can be seen that ε´ decreases with increasing frequency. The reason is that, above a given frequency, the dipoles can no longer follow the field. This decrease in ε´ is related to a peak in the ε´´ vs. ln f plot (see Fig. 12). Fig. 13 is an Argand diagram, that is, a plot of ε´´ vs. ε´ at different temperatures. It can be seen that the plot does not correspond to a semicircle centred on the x-axis. For this reason, the experimental data were fitted to the Havriliak-Negami equation. The real and imaginary parts of the complex permittivity were plotted separately in Figs. 11 and 12. Fitting to the Havriliak-Negami equation allows calculation of the parameters a, b, εu, εr, τ and σ, which supply important information about the relaxation of the system.
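The fitting step can be sketched numerically. In practice a nonlinear least-squares routine would fit all the parameters at once; the minimal sketch below, on synthetic data with assumed values, recovers only the shape parameters a and b by grid search, with the remaining parameters taken as known:

```python
import numpy as np

def hn_loss(omega, eps_u, eps_r, tau, a, b):
    """Imaginary part (loss) of the Havriliak-Negami equation, Eq. (12)."""
    return -(eps_u + (eps_r - eps_u) / (1 + (1j * omega * tau) ** a) ** b).imag

# Synthetic 'measured' loss data (illustrative; real data would come from the DEA scans)
omega = np.logspace(0, 7, 80)
eps2_meas = hn_loss(omega, 2.6, 3.7, 1e-3, 0.85, 0.7)

# Coarse grid search over the shape parameters a and b
best = None
for a in np.arange(0.5, 1.01, 0.05):
    for b in np.arange(0.5, 1.01, 0.05):
        err = np.sum((hn_loss(omega, 2.6, 3.7, 1e-3, a, b) - eps2_meas) ** 2)
        if best is None or err < best[0]:
            best = (err, a, b)
print(best[1], best[2])  # recovers ~0.85 and ~0.7
```

The same residual-minimization idea, extended to all six parameters with a proper optimizer, is what produces the fitted curves through the Argand data.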
Figure 13. Argand plot for the thermoset BADGE(n=0)/m-XDA at 134, 154 and 174 ºC.

6. Application of DEA to thermoset cure processes
Characterization of a cured thermoset by DEA, and the information available from this type of measurement, have been described earlier in this chapter. A great number of papers have been reported[15,16] on ways to optimize DEA data with the objective of obtaining important information on the behaviour of this kind of system. In our opinion, the study of thermosets during the cure process[17-23] is perhaps more important, due to the technological necessity of knowing the fundamental properties of the mixture in order to optimize the final product. This is the reason why many investigators have focused on the characterization of the cure process through DEA, trying to relate dielectric properties to mechanical, rheological and, mainly, thermodynamic properties. This was facilitated by the growth and improvement of thermal analysis equipment, DEA among it. In our study, a TA Instruments DEA 2970 was used. The study of a curing process increases the complexity of the system, since the dielectric properties depend on temperature, time and cure degree, as well as on the form and chemical composition of the substances involved in the mixture. Because of
this, one of the most used experiments is the isothermal one (constant temperature). Figs. 14 a)-c) are plots of permittivity, loss factor and ionic conductivity as functions of time. They correspond to the isothermal cure of the system DGEBA(n=0)/1,2-DCH at 55 ºC in the frequency range from 10⁻¹ to 10⁵ Hz.
Figure 14. a) ε´ as a function of time for the isothermal (55 ºC) curing process of the system DGEBA/1,2-DCH. Frequencies decrease with increasing time.

Figure 14. b) ε´´ as a function of time for the isothermal (55 ºC) curing process of the system DGEBA/1,2-DCH. Frequencies decrease with increasing time.
Figure 14. c) σ as a function of time for the isothermal (55 ºC) curing process of the system DGEBA/1,2-DCH. Frequencies decrease with increasing time.
Fig. 14 a) shows that the permittivity decreases slightly with time at short times or high frequencies. This is the so-called relaxed permittivity (εr), which can only be observed at high frequencies because, at low frequencies, the conductivity is high enough to cause electrode polarization[24], characterized by the extremely high values of ε´ at low cure times. εr measures the molecular dipole contribution, which decreases with time owing to the chemical changes originated during the curing process. As the curing time increases, ε´ undergoes a significant decrease caused by the vitrification of the system, which makes the molecular dipoles unable to orient with the electric field and unable to contribute to the real permittivity of the system. This decrease takes place at shorter times as the measurement frequency increases. Once the cure reaction has proceeded, the permittivity tends to a constant value known as the unrelaxed permittivity (εu), to which only atomic and electronic polarizabilities contribute. The difference εr − εu is known as the dipole strength, represented by Δε, which is directly related to the dipole contribution of the molecules. Because of this, the variation of Δε with time under isothermal conditions provides information about the changes in molecular dipoles originated by the chemical reactions during the cure process. Although DEA supplies information on dipole density and moment, there are some other techniques that provide more reliable information about the molecular configuration. For instance, infrared spectroscopy is a very powerful technique for determining the concentrations of primary, secondary and tertiary amines during the curing reaction of an epoxy-diamine system. A plot of ε´´ vs. t is shown in Fig. 14 b). As previously mentioned, there are two phenomena that contribute to ε´´: ionic conductivity and dipole relaxation.
In the case of cured thermosets, conductivity is only significant at temperatures considerably above Tg and, because of that, does not affect the observation of that transition. However, in the case of the curing reaction, conductivity is a dominant
factor, mainly at low curing times and frequencies, that sometimes hides the vitrification process. Fig. 14 b) also shows that the loss factor decreases significantly with increasing curing time. In this zone, the values of the ionic conductivity are higher, so the conductivity makes an important contribution to ε´´. The marked decrease in ionic conductivity with time is caused by an increase in the viscosity of the system and, because of that, by the decrease in free volume caused by crosslinking. The time at which the viscosity becomes infinite is known as the gel time (tgel). Johari[25] proposed an equation that relates the ionic conductivity σ to the cure time t and tgel as follows:

σ = σ0((tgel − t)/tgel)^x        (18)

where σ0 is the conductivity at t = 0 and x is a critical exponent depending on the isothermal cure temperature. The value of tgel can be obtained approximately from this equation in an isothermal cure. As the cure time increases, ε´´ increases due to vitrification and the ε´´ - t curve presents a maximum at a given time. As happened with ε´, the time corresponding to the maximum of the curves depends on the frequency, with higher values at low frequencies. If we want to obtain the time to vitrification, we have the problem that this kind of dynamic measurement depends on frequency. Some authors propose that the time to vitrification coincides with the time to reach the maximum of the ε´´ vs. t curve at frequencies in the range from 1 to 3 Hz. Because there are two phenomena that contribute to ε´´, it may happen that at low frequencies the dipole transition is overlapped by the ionic contribution, as happens in Fig. 14 b). For this reason, the experimental procedure must be careful, trying to attain maximum resolution for vitrification while minimizing the conductivity contribution. As the reaction progresses, ε´´ tends to a constant value that depends greatly on the ionic conductivity of the system after being cured. A practical way to check, at a given reaction time, which of the two phenomena, dipole relaxation or conductivity, is dominant is based on the plot of σ (= ε´´ωε0) versus time at different frequencies (Fig. 14 c), because the conductivity contribution does not depend on the frequency. It can be seen in Fig. 14 c) that, at the first stages of curing, ionic conductivity is the most important contribution to ε´´ and decreases with time. At higher curing times, σ depends on frequency, thus indicating that the system is in the vitrification zone, where dipole relaxations depend on ω. At the highest curing times, the curing reaction is complete for the isothermal temperature and ionic conductivity again becomes dominant.
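Johari's relation can be used in reverse: given an isothermal conductivity decay, the exponent x follows from a log-log linearization (and, with x known, tgel can be estimated by fitting). A sketch on synthetic data with assumed values of σ0, tgel and x (illustrative only):

```python
import numpy as np

def johari_sigma(t, sigma0, t_gel, x):
    """Johari's relation: sigma = sigma0*((t_gel - t)/t_gel)**x, valid for t < t_gel."""
    return sigma0 * ((t_gel - t) / t_gel) ** x

# Illustrative isothermal-cure values (not from the chapter)
sigma0, t_gel, x_true = 1e-8, 3600.0, 1.9
t = np.linspace(0.0, 3500.0, 200)
sigma = johari_sigma(t, sigma0, t_gel, x_true)

# With t_gel known, x is the slope of log(sigma/sigma0) vs log(1 - t/t_gel)
slope, _ = np.polyfit(np.log(1 - t / t_gel), np.log(sigma / sigma0), 1)
print(slope)  # ~1.9
```

With real data one would scan candidate tgel values and keep the one that makes this log-log plot most nearly linear.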
There have been different attempts to relate dielectric measurements to other thermal analysis techniques, especially calorimetric measurements, since these are frequently used and provide important information. The cure degree α was related to the conductivity through the following equation:

∂(log σ)/∂t = (∂(log σ)/∂α)(∂α/∂t)        (19)

where 1/σ is the resistivity of the system and ∂α/∂t the reaction rate.
Maffezzoli[19] proposed an equation that relates α and σ:

α = αmax [(log σ0 − log σ)/(log σ0 − log σ∞)] (log σ∞/log σ)^p        (20)

where αmax is the conversion at the end of the isothermal cure and p is an empirical parameter. Interpretation of experimental data becomes more complicated when a new experimental variable, temperature, is introduced. When the resin-hardener mixture is subjected to an increase in temperature at a controlled heating rate, both ε´ and ε´´ depend on time, temperature and cure degree. A quantitative analysis of this type of experiment is represented in Fig. 15. This figure shows the behaviour of an epoxy resin-aliphatic diamine system when the mixture is subjected to a controlled heating rate of 5 ºC/min. One of the most useful variables for the graphical representation of this kind of experiment is the logarithm of the ionic conductivity. As in the case of isothermal experiments, it allows one to differentiate between zones in which conductivity is dominant (and in which, because of this, vitrification or the glass transition cannot be observed) and zones in which dipole transitions are dominant.
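Maffezzoli's relation maps a measured conductivity directly onto a cure degree. A minimal sketch with assumed endpoint conductivities and parameters (illustrative only): at the start of cure (σ = σ0) it returns α = 0, and at the end (σ = σ∞) it returns αmax:

```python
import numpy as np

def conversion_from_sigma(sigma, sigma0, sigma_inf, alpha_max, p):
    """Cure degree from ionic conductivity via Maffezzoli's relation."""
    ls, ls0, lsi = np.log10(sigma), np.log10(sigma0), np.log10(sigma_inf)
    return alpha_max * (ls0 - ls) / (ls0 - lsi) * (lsi / ls) ** p

# Illustrative values (not from the chapter)
sigma0, sigma_inf, alpha_max, p = 1e-7, 1e-12, 0.95, 1.0

print(conversion_from_sigma(1e-7, sigma0, sigma_inf, alpha_max, p))   # 0.0 at the start
print(conversion_from_sigma(1e-12, sigma0, sigma_inf, alpha_max, p))  # alpha_max at the end
```

In practice σ0, σ∞ and p would be determined from the dielectric data themselves, and α cross-checked against calorimetric conversions.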
Figure 15. Plot of σ vs. T for an epoxy-diamine system.
Fig. 15 shows that at low temperatures (from −100 to 0 ºC) the ionic conductivity depends on frequency, and the glass transition of the unreacted system (Tg0) takes place without reaction. As the temperature increases, the conductivity increases up to a value that corresponds to a minimum in viscosity. It must be pointed out that the ionic conductivity can be divided into two different contributions:
- a thermal component, which increases as the temperature increases (simple fluidification). This term, σth, can be analysed as a thermal-agitation contribution to σ;
- a curing component: curing increases the resin molecular weight and hence the viscosity, so molecular mobility decreases. This term, σcure, leads to a decrease in σ.
Precisely this second term is the cause of the decrease in ionic conductivity as the temperature increases, because the system approaches gelation, in which
viscosity tends to infinity (note that there is no dipole contribution in the range from 25 to 75 ºC because σ does not depend on frequency). If the temperature continues to increase, the process again becomes controlled by dipole motion. Vitrification takes place and, at higher temperatures, the glass transition. As both processes depend on frequency, at low frequencies they overlap, as can be seen in Fig. 15. The vitrification and glass transition peaks are clearly differentiated at 10⁵ Hz. A further increase in temperature originates an increase of conductivity in the rubbery state of the system. Let us consider some practical cases of the use of DEA for the characterization of polymeric materials.

7. Characterization of Ethylene-Vinyl Acetate Copolymers by Dielectric Analysis (Courtesy of TA Instruments)
Ethylene-vinyl acetate (EVA) is a generic name used to describe a family of thermoplastic polymers ranging from 5 to 50% by weight of vinyl acetate incorporated into an ethylene chain. Increasing the level of vinyl acetate in an EVA polymer reduces the overall crystallinity associated with the polymer, which increases its flexibility and reduces its hardness. A series of EVA polymers with varying levels of vinyl acetate was analyzed using the Dielectric Analyzer 2970 with the ceramic single-surface sensor. The EVA polymers contained the following levels, by weight, of vinyl acetate: 14, 18, 25, 28 and 33%. To compare results directly between samples, a single frequency which yields a well-defined loss peak must be chosen. Figure 16 shows the comparative data at 1000 Hz. The comparative plot shows that the glass transition temperature is relatively insensitive to the level of vinyl acetate in the EVA polymer. However, the magnitude of the loss transition is very dependent on the concentration of vinyl acetate. As the level of vinyl acetate increases, the magnitude of the loss factor peak increases. This is consistent with the fact that increasing the level of vinyl acetate increases the flexibility associated with the polymer. Hence, a DEA analysis based on calibration with a series of known EVA formulations could be used to follow vinyl acetate content.
Figure 16. Comparative plot of ε´ vs. T for different levels of vinyl acetate.
FUNDAMENTALS AND APPLICATIONS OF DEA
8. Characterization of PMMA by Dielectric Analysis. (Courtesy of TA Instruments).
Poly(methyl methacrylate), or PMMA, is a thermoplastic material commonly used in applications where good impact properties are required. The mechanical properties exhibited by PMMA are due, in part, to the occurrence of a secondary transition (beta relaxation) near room temperature. The sintered powder was heated at a rate of 3 ºC/min from -80 to 180 ºC. The following analysis frequencies were used: 1, 3, 10, 30, 100, 300, 1000, 3000 and 10000 Hz. Displayed in Figure 17 are the results (loss factor, ε´´) for the PMMA specimen. The beta transition, which is associated with the rotation of the methacrylate group, is observed as a series of peaks between 0 and 100 ºC. The transition is frequency dependent since it is a time-dependent, non-equilibrium event. The alpha or glass transition occurs at approximately 120 ºC. This event is also frequency dependent. The loss factor data reveal that, at the lower frequencies, the two transitions are clearly resolved and two separate peaks are observed. As the applied frequency increases, the resolution between the alpha and beta events decreases. This behaviour is expected for two transitions which occur relatively close to one another and have significantly different activation energies.
Figure 17. Plot of ε´´ vs. T for PMMA at different experimental frequencies.
References
1. J.M. Albella, J.M. Martínez: Física de Dieléctricos (Marcombo Boixareu Editores, Barcelona, México).
2. P. Debye: Polar Molecules (Chemical Catalog Co., New York 1929).
3. K.S. Cole, R.H. Cole: J. Chem. Phys. Vol. 9 (1941), p. 341.
4. D.W. Davidson, R.H. Cole: J. Chem. Phys. Vol. 19 (1951), p. 1484.
5. S. Havriliak Jr., S. Negami: J. Polymer Sci. Vol. 14 (1966), p. 99.
LISARDO NÚÑEZ, CARLOS GRACIA-FERNÁNDEZ, SILVIA GÓMEZ-BARREIRO
6. S. Havriliak Jr., S. Negami: Polymer Vol. 8 (1967), p. 161.
7. S.D. Senturia, N.F. Sheppard: Dielectric Analysis of Thermoset Cure (Springer-Verlag, Berlin, Heidelberg 1986).
8. R. Kohlrausch: Ann. Phys. Chem. Vol. 91 (1854), p. 179.
9. G. Williams, D.C. Watts: Trans. Faraday Soc. Vol. 66 (1970), p. 80.
10. H. Vogel: Physik. Z. Vol. 22 (1921), p. 645.
11. G.S. Fulcher: J. Am. Ceram. Soc. Vol. 8 (1925), p. 339.
12. M.L. Williams, R.F. Landel, J.D. Ferry: J. Am. Chem. Soc. Vol. 77 (1955), p. 3701.
13. N.F. Sheppard, S.D. Senturia: J. Polym. Sci. Vol. 27 (1989), p. 753.
14. J.D. Ferry: Viscoelastic Properties of Polymers (John Wiley & Sons, New York 1993).
15. M. Matsukawa, H. Okabe, K. Matsushige: J. Appl. Polym. Sci. Vol. 50 (1993), p. 67.
16. M. Ochi, M. Yoshizumi, M. Shimbo: J. Appl. Polym. Sci. Vol. 25 (1987), p. 1817.
17. J.T. Gotro, M. Yandrasits: Polym. Eng. Sci. Vol. 29 (1989), p. 278.
18. J.O. Simpson, Bidstrup: Polym. Mater. Sci. Eng. Vol. 65 (1991), p. 339.
19. A. Maffezzoli, A. Trivisano, M. Opalicki, J. Mijovic, J.M. Kenny: J. Mater. Sci. Vol. 29 (1994), p. 800.
20. S. Montserrat, F. Roman, P. Colomer: Polymer Vol. 44 (2003), p. 101.
21. M.B.M. Mangion, G.P. Johari: J. Polym. Sci.: Polym. Phys. Edn. Vol. 28 (1990), p. 1621.
22. M.B.M. Mangion, G.P. Johari: Macromolecules Vol. 23 (1990), p. 3687.
23. M.B.M. Mangion, G.P. Johari: J. Polym. Sci.: Polym. Phys. Edn. Vol. 29 (1991), p. 437.
24. M.B.M. Mangion, G.P. Johari: J. Non-Crystalline Solids Vol. 133 (1991), p. 921.
Dielectric Analysis. Experimental Silvia Gómez-Barreiro, Carlos Gracia-Fernández, Lisardo Núñez Regueira Research Group TERBIPROMAT, Departamento Física Aplicada, Universidade de Santiago de Compostela. Av. J. M. Suárez Núñez, 15782 Santiago, Spain
[email protected]
1. Introduction
Dielectric analysis is a measuring technique whose use increases day by day; in recent years it has become one of the most interesting techniques to monitor the evolution of physical and chemical properties during processing and utilization of polymer materials [1]. This is due, in part, to the growing use of speciality materials in the electrical and electronics industry and to the excellent diagnostic properties of dielectric behaviour. But a major influence on the rise in popularity of these studies is the relative simplicity of much of the apparatus required and the ease with which a very wide range of frequencies can be employed [2]. There are a number of electrical properties that can be readily observed as a function of temperature. The most commonly used techniques follow changes in a.c. or d.c. conductivity, capacitance or dielectric properties, thermally stimulated discharge currents, and the emf developed between dissimilar electrodes in contact with the sample (thermovoltaic detection) [3]. This chapter will focus on the a.c. dielectric analysis techniques. Dielectric analysis (DEA or DETA), or dielectrometry, involves determination of the electrical polarization and conduction properties of a sample subjected to a time-varying electric field. It complements the traditional techniques by allowing the scientist to view molecular motion from a different perspective, that is, through changes in electrical properties, and thus provides both thermal and rheological information. In thermal analysis experiments, the DEA heats and/or cools samples in order to identify thermal transitions. For some measurements, it has extremely high sensitivity to changes in physical properties, which makes it possible to detect transitions that are not visible by other techniques. For rheological studies, the DEA is particularly effective because it can monitor the movement of ions in a material.
A single dielectric test can identify key events affecting rheological changes: the time and temperature which correspond to minimum viscosity, the onset of flow, the onset of cure, the maximum rate of reaction, and the completion of cure [4]. While dielectric analysis does not provide absolute values for viscosity, the shape of the dielectric curves can usually be correlated directly to the viscometer profiles of curing resins. One of the most important advantages of DEA over other techniques is that dielectric information can be obtained almost instantaneously, with only minimal disturbance of the process, allowing control in real time. With DEA we can measure dielectric properties such as permittivity and loss factor, and detect the α-transition, secondary transitions (β, γ, etc.), rheological phenomena such as viscosity minima, polymerization and cross-linking, segmental mobility, dipolar relaxations and ionic conductivity, vitrification during cure and gelation for thermosetting materials, rate of cure, degree of cure, etc. [4].
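A minimal sketch of how some of these key events could be picked off a dielectric cure trace. The log ion-viscosity curve below is simulated with an assumed analytic shape (falling while the resin flows, rising during cross-linking), so the event times are illustrative only:

```python
import math

# Simulated log10(ion viscosity) during cure: it falls as the resin melts
# and flows, passes through a minimum, rises during cross-linking, and
# levels off as the cure completes. The shape is assumed, not measured.
times = [0.5 * i for i in range(121)]  # minutes
log_eta = [5.0 + 2.0 * math.exp(-t / 4.0)
           + 3.0 / (1.0 + math.exp(-(t - 20.0) / 3.0)) for t in times]

steps = [b - a for a, b in zip(log_eta, log_eta[1:])]

# Minimum viscosity: lowest point of the trace.
i_min = min(range(len(times)), key=lambda i: log_eta[i])

# Maximum rate of reaction: steepest rise after the minimum.
i_rate = max(range(i_min, len(steps)), key=lambda i: steps[i])

# Completion of cure: the trace has essentially stopped rising.
i_end = next(i for i in range(i_rate, len(steps)) if steps[i] < 0.005)

print(f"minimum viscosity at ~{times[i_min]:.1f} min, "
      f"maximum cure rate at ~{times[i_rate]:.1f} min, "
      f"cure complete by ~{times[i_end]:.1f} min")
```

The flatness tolerance used for "completion" is arbitrary here; in a real analysis it would be set from the noise level of the measured conductivity.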
2. Experimental Methods
It is a feature of the dielectric technique that measurements can be performed nearly continuously over the frequency range 10^-4 to 3x10^10 c/s. There exists a particular method for every frequency range. Table 1 summarizes the methods employed in particular regions of this large frequency range [2].

Table 1
Frequency range      Method                                Remarks
10^-4 to 10^-1 Hz    d.c. transient measurements           Analogous to creep effect
10^-2 to 10^2 Hz     Ultra-low frequency bridge            Precise determination of ε´ - iε´´
10 to 10^7 Hz        Schering bridge, transformer bridge   Precise determination of ε´ - iε´´
10^5 to 10^8 Hz      Resonance circuits                    Upper limit of lumped circuit methods
10^8 to 10^9 Hz      Coaxial line                          Good only for medium and large ε´´
                     Re-entrant cavity                     Good only for low ε´´ values
10^9 to 3x10^10 Hz   H01n cavity resonator                 Same as above
                     Coaxial lines and waveguides          Good for medium and high ε´´ only
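The ranges of Table 1 overlap, so more than one method can apply at a given frequency. A small lookup sketch (method names paraphrased from the table) makes this concrete:

```python
# Lookup sketch of Table 1: given a measuring frequency in Hz, list the
# techniques whose working range covers it. Ranges overlap, so several
# methods may apply at one frequency.
METHODS = [
    (1e-4, 1e-1, "d.c. transient measurements"),
    (1e-2, 1e2,  "ultra-low frequency bridge"),
    (1e1,  1e7,  "Schering bridge / transformer bridge"),
    (1e5,  1e8,  "resonance circuits"),
    (1e8,  1e9,  "coaxial line / re-entrant cavity"),
    (1e9,  3e10, "H01n cavity resonator / coaxial lines and waveguides"),
]

def methods_for(freq_hz):
    """Return the measurement methods applicable at a given frequency."""
    return [name for lo, hi, name in METHODS if lo <= freq_hz <= hi]

print(methods_for(1e3))   # an audio-frequency measurement
print(methods_for(1e6))   # two methods overlap here
```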
These methods may be classified into two general groups: "lumped circuit" and "distributed circuit" methods. In the "lumped circuit" range, approximately 10^-4 to 10^8 c/s, the experimental technique is designed to measure the equivalent capacitance and resistance at a given frequency. At higher frequencies, the effect of residual inductance in the measuring assembly makes it difficult to regard the sample as a resistance-capacitance arrangement. The experimental method for the "distributed circuit" range, 10^8 to 3x10^10 c/s, is designed to measure the attenuation factor α and the phase factor β at a given frequency. We shall show below how the experimental quantities, resistance and capacitance, and attenuation and phase factor, are related to the complex dielectric constant ε*. A large part of the dielectric work on polymers has been confined to the frequency range 10^2 to 10^5 c/s. It is, however, essential that as large a frequency range as possible be covered, since dielectric relaxation curves for polymers are broad and very sensitive to temperature variation.
2.1. Distributed circuits 10^8 to 3x10^10 c/s
At frequencies above about 10^8 c/s, it is extremely difficult to make lumped circuit measurements [2], due to the increasing importance of residual inductance. Methods have been developed which avoid this problem, based on the concepts of propagation of electromagnetic waves along rectangular or cylindrical waveguides or coaxial transmission lines (see Figure 1).
Figure 1.
2.2. Lumped circuits
In the frequency range 10^-4 to 10^8 c/s, it is convenient to regard a polymer sample as being electrically equivalent to a capacitance Cx in parallel with a resistance Rx at a particular frequency. Both Cx and Rx will in general be frequency dependent. We must now find a relationship between Cx, Rx and the complex dielectric constant ε*. This will be studied further on. The lumped circuit techniques to be described are mainly designed to measure Cx and Rx at a particular frequency. Cx and Rx may be converted to ε´ and ε´´ values, respectively, knowing C0. This is also useful for characterizing viscoelastic relaxation [5]. The contribution to the total loss arising from a d.c. conductivity process is given by:

ε0´´(ω) = G0/(ωC0) = 1/(RxωC0)    (1)

ε0´´(ω) can be evaluated in practice by measuring R0 or [Rspec] by a simple direct-current method, and using eq. (1) to obtain ε0´´(ω) at any desired value of ω. Some of the different methods, arranged according to increasing frequency range, are the following: d.c. transient current method, 10^-4 to 10^-1 c/s; ultra-low frequency bridge, 10^-2 to 10^2 c/s; Schering bridge, 10 to 10^6 c/s; resonance circuits, 10^5 to 10^8 c/s; re-entrant cavity, 10^8 to 10^9 c/s [2].
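The Cx, Rx to ε´, ε´´ conversion just described can be sketched numerically. The geometry and measured values below are assumptions for illustration, and `empty_cell_capacitance` is a hypothetical helper using the standard parallel-plate formula C0 = ε0 A/d:

```python
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def empty_cell_capacitance(area_m2, spacing_m):
    """Geometric (air) capacitance C0 of a parallel-plate cell."""
    return EPS0 * area_m2 / spacing_m

def permittivity_from_lumped(cx, rx, freq_hz, c0):
    """Convert the equivalent parallel Cx, Rx of a sample into eps' and
    eps''; the loss term follows eq. (1): 1/(Rx * omega * C0)."""
    omega = 2.0 * math.pi * freq_hz
    eps_real = cx / c0
    eps_imag = 1.0 / (rx * omega * c0)
    return eps_real, eps_imag

# Illustrative values (not from the text): 25 mm diameter plates,
# 0.5 mm spacing, measured at 1 kHz.
c0 = empty_cell_capacitance(math.pi * 0.0125 ** 2, 0.5e-3)
eps1, eps2 = permittivity_from_lumped(cx=26e-12, rx=5e8, freq_hz=1e3, c0=c0)
print(f"eps' = {eps1:.2f}, eps'' = {eps2:.3f}, tan delta = {eps2 / eps1:.4f}")
```

For a loss dominated by d.c. conduction, eps'' computed this way falls as 1/ω, which is the frequency-independent-σ signature mentioned earlier in the text.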
2.3. D.C. transient current method 10^-4 to 10^-1 c/s
Figure 2 gives a simple circuit to illustrate the method. When switch (S1,a) is closed, the sample Cx responds to the step voltage V, giving rise to a transient charging current through Cx which is measured by the amplifier circuit. After charging equilibrium has been attained, switch (S1,b) is closed [opening (S1,a)], and the transient discharge current is measured.
Figure 2 [6].
If the sample consisted of a pure capacitance only, there would be no transient current [2]. Since transients are obtained in practice, the dielectric must be considered as having a time-dependent resistance associated with it.
2.4. Ultra-low frequency bridge 10^-2 to 10^2 c/s
The main difficulty in the lower region of this frequency range is that the generator cannot be coupled to the bridge via a transformer but must be coupled directly. This was achieved in Scheiber's [7] design and, in addition, a Wagner earth was not required to balance the bridge, which is a great saving of time at low frequencies. The actual bridge works on the Schering bridge principle and, using a substitution method, very precise ε*(ω) measurements are possible on polymer compounds. Other bridges used in this region have been reviewed by Scheiber. The schematic circuit diagram of the Scheiber bridge is shown in Figure 3.
Figure 3.
The generator is directly coupled to the bridge circuit. Z1, Z2 and ZL are stray impedances within the generator; J1, J2, J3 and J4 are 15 kΩ resistors in the generator. CG is shorted for bridge measurements. The sample Cx, Rx is measured by balancing the bridge with the sample "in" using R1, R2 and Cs. Here R1 is a 100 Ω decade resistor, and R2 = 10^6, 10^7, 10^8 or 10^9 Ω interchangeable calibrated resistors. Cs and CB are precision three-terminal variable capacitors (10 to 110 μμF). C3 and C4 are 1000 μμF precision two-terminal capacitors. R3 and R4 are matched precision 10^5 Ω resistors.
2.5. Schering bridge 10 to 10^6 c/s
This is the most common method for the measurement of ε*, particularly in polymer work. Various designs differing in detail are used, but the basic principle of one of the most commonly used is illustrated in Figure 4. This is a simple capacitance bridge [2]. For a sample in arm A, at balance we have (ZAZC)in = (ZBZD)in, where ZA is the total impedance of arm A, etc. With the sample out, only C1 and C4 need be adjusted to rebalance the bridge, giving (ZAZC)out = (ZBZD)out.
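The balance condition can be exercised numerically with complex arithmetic. The arm assignments and component values below are assumptions for illustration (the text does not specify the circuit of Figure 4 in this detail); the sample is modeled as Rx in series with Cx:

```python
import math

# Numerical sketch of the balance condition (ZA ZC) = (ZB ZD), with the
# sample in arm A. The arm contents and values are assumed, not taken
# from Figure 4: arm B a standard capacitor, arm D a fixed resistor,
# arm C an adjustable resistor-capacitor pair in parallel.
omega = 2.0 * math.pi * 1e3  # 1 kHz

c2 = 100e-12   # standard capacitor, arm B
r3 = 1.0e4     # fixed resistor, arm D
r4 = 1.0e4     # adjustable resistor, arm C
c4 = 2.0e-9    # adjustable capacitor, arm C (parallel with r4)

zb = 1.0 / (1j * omega * c2)
zd = r3
zc = r4 / (1.0 + 1j * omega * r4 * c4)

# Solve the balance condition for the sample arm: ZA = ZB * ZD / ZC
za = zb * zd / zc

rx = za.real                   # series resistance of the sample
cx = -1.0 / (omega * za.imag)  # series capacitance of the sample
tan_delta = omega * rx * cx
print(f"Cx = {cx * 1e12:.1f} pF, Rx = {rx:.3e} ohm, tan d = {tan_delta:.4f}")
```

With this assumed arrangement the recovered loss satisfies tan δ = ωR4C4, so the loss can be read directly from the adjustable capacitor, which is one reason this bridge type is convenient for lossy dielectrics.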
Figure 4.
The Schering bridge is capable of very high accuracy for ε´ and ε´´, and the uncertainty of measurement is often due to the fact that the sample dimensions are not known to the accuracy with which the C1 and C4 changes can be determined.
2.6. Resonance circuits 10^5 to 10^8 c/s
Above about 10^6 c/s the effects of stray impedance (particularly inductance) become increasingly significant. The bridge methods cannot be used above about 10 Mc/s for this reason. Various methods have been devised for this range, and we shall confine ourselves to the conductance-variation resonance method, which is probably the most widely used in polymer studies and also gives very precise results. This is illustrated schematically in Figure 5 [8].
Figure 5.
The resonance circuit is made up of two precision variable capacitors C1 and C2, an inductance L, the sample Cx, Rx, and the voltmeter V. The circuit is brought to resonance using a loosely coupled generator circuit of variable frequency. At resonance, the half-width δin of the resonance curve is determined using the micrometer capacitor C2 (≅ 0-8 μμF). The sample is then withdrawn from the resonance circuit and resonance restored by changing C1 only.
2.7. Re-entrant cavity 10^8 to 10^9 c/s
For low loss factors (tan δ ≅ 10^-4) a resonant cavity is appropriate in the frequency range 10^8 to 10^9 c/s. Figure 6 shows Parry's [9] apparatus in schematic form. The method is an extension of the Hartshorn-Ward [8] method (1936) to higher frequencies.
Figure 6.
The sample is placed between the electrodes, and the system is equivalent to a closed coaxial transmission line in which the central conductor is in two parts, separated by the sample. The cavity is brought to resonance using frequency as the variable, the resonance being detected using a silicon crystal and loop. In particular, this chapter will be devoted to describing the instrument designed by TA Instruments, that is, the Dielectric Analyzer DEA 2970, which works in the frequency range from 0.003 to 100 kHz and thus belongs to the "lumped circuits" group.
3. Description
3.1. Hardware
The DEA 2970 dielectric analyzer is an add-on module for any of the TA Instruments Thermal Analyst systems [10]. It consists of a sensor and ram/furnace assembly (Figure 7), incorporated in a cabinet which contains the supporting electronics. There are four types of sensors (Figure 9) - ceramic parallel plate, ceramic thin film, ceramic single surface, and remote single surface - which are interchangeable and disposable. The system's exceptional versatility permits analysis of bulk or surface properties, using milligram or full-size product samples (e.g., in a laboratory oven, a large part in a moulding press, or sheets of prepreg in storage). Sensor disposability is not only a convenience and ease-of-use feature; it also makes possible the measurement of hard-to-handle samples. The ceramic sensors are mounted in the ram/furnace assembly, which provides all the necessary environmental conditions: controlled heating and cooling, atmosphere, and applied force. The ram, driven by a stepper motor, applies a constant force or maintains constant plate spacing, based on information from a force transducer and a linear variable differential transformer (LVDT). This assures the desired electrode spacing and optimum surface contact with the sample. Sensor insertion and removal are quick and easy, requiring no tools, fasteners or soldering [10]. All electrical contacts are made automatically.
Figure 7.
The remote single-surface sensor consists of a flexible ribbon cable with a microdielectrometer sensor at one end and a connector at the other. The sensor end is designed to be embedded in a sample; the connector end is for attachment to the instrument. The module cabinet contains the electronic circuits and software for experiment control and data handling, a keyboard/display for local control of operation, and a GPIB interface for communication with the controller. The controller is an essential component of the complete DEA system. It is used to program experiments, analyze results, and customize reports. A plotter is required for preparation of hard-copy reports. We will now see all these components in detail and how they operate.
3.2. Electronics
The heart of the DEA system is its electronic circuitry and software. They implement the theory of the technique and give life to the hardware, making the system effective, practical, accurate, and fast. Results are produced almost instantaneously. One key to the effectiveness of the DEA 2970 is its measurement technique, which avoids the limitations inherent in instruments based on a Wheatstone bridge. This makes possible accurate measurements at low frequencies as well as high frequencies, and contributes to the instrument’s operating speed.
Figure 8.
Precision and accuracy are further assured by complete factory calibration of the measurement electronics. The design of the electronic system is depicted in Figure 8. The components of the system and their functions are:
Controller/Analyzer: This is the operator's primary interface with the instrument. It is used to program experiments and analyze results. The DEA module can be operated from all of the TA Instruments Thermal Analyst controllers.
Module Microprocessor: A microprocessor/computer is the heart of the module's operating electronics. Located in the DEA module, it controls all instrument functions, including operation of the experiment, mathematical manipulation of data, and communication with the controller [10].
Frequency Generator: The frequency generator synthesizes a specific, high-purity sine-wave signal to establish an electrical field and excite the sample. The computer memory stores a 32K-point sine-wave generation table. Each point is a 16-bit number, which gives a signal resolution of 1 part in 64,000.
Electrodes: The input frequency signal at a specified voltage is applied to the sample through the input electrode. The output electrode receives the response current from the sample.
Response Interface: An electronic interface reads the measured response current generated by the sample, amplifies the signal, and sends it to the A/D converter. It also feeds the signal back to the guard ring on the response electrode (and to the cable shielding) to assure voltage equilibrium with the electrode, and thus prevent current leakage.
A/D Converter: The A/D converter transforms the amplified analog signal to a digital format.
Digital Signal Processor (DSP): Signals from the A/D converter and information about the input voltage are used by the DSP to determine the in-phase and out-of-phase current.
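The in-phase/out-of-phase extraction performed by the DSP can be illustrated with a lock-in style demodulation. Everything below (excitation amplitude, C0, the simulated sample with ε´ = 3 and ε´´ = 0.1) is assumed for illustration, not instrument data:

```python
import math

# Lock-in style demodulation sketch: the sampled response current is
# multiplied by reference sine and cosine waves to obtain its in-phase
# and quadrature components, from which eps'' and eps' follow.
FREQ = 1.0e3        # excitation frequency, Hz
V0 = 1.0            # excitation amplitude, V
C0 = 10e-12         # empty-cell capacitance, F (assumed)
N = 1000            # samples over one period
OMEGA = 2.0 * math.pi * FREQ
DT = 1.0 / (FREQ * N)

# Simulated response current for a sample with eps' = 3.0, eps'' = 0.1.
# For V(t) = V0 sin(wt): I(t) = w C0 V0 (eps'' sin(wt) + eps' cos(wt)).
current = [OMEGA * C0 * V0 * (0.1 * math.sin(OMEGA * n * DT)
                              + 3.0 * math.cos(OMEGA * n * DT))
           for n in range(N)]

# Demodulate over one full period.
in_phase = 2.0 / N * sum(c * math.sin(OMEGA * n * DT)
                         for n, c in enumerate(current))
quadrature = 2.0 / N * sum(c * math.cos(OMEGA * n * DT)
                           for n, c in enumerate(current))

eps_loss = in_phase / (OMEGA * C0 * V0)    # eps'' (loss factor)
eps_perm = quadrature / (OMEGA * C0 * V0)  # eps'  (permittivity)
print(f"eps' = {eps_perm:.3f}, eps'' = {eps_loss:.3f}")
# prints: eps' = 3.000, eps'' = 0.100
```

For a lossless capacitor the current would be purely in quadrature with the voltage; the small in-phase component carries the loss.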
The processed phase and gain signals are then sent back to the Module Microprocessor, where they are combined with sample-thickness measurement signals from the LVDT to calculate permittivity (ε´) and loss factor (ε´´). The entire process - from frequency generation to final calculations - takes place almost instantaneously, making meaningful results available in real time. They can be read on the controller screen or on the module display.
4. Principle of Operation
A complete dielectric analysis system requires the DEA 2970 module, a Thermal Analyst controller/analyzer and a plotter for preparation of hard-copy reports [11].
4.1. Programmer/Controller/Analyzer
The controller/analyzer is used for programming an experiment, performing other operator-programmable control functions, and analyzing data. With it, the operator establishes all conditions and parameters for an experiment, such as method, temperature program, sample spacing, and force. These instructions are transmitted to
the module's operating software, which is resident in a microprocessor located in the electronics base of the DEA module. The microprocessor also contains programming for mathematical calculations, data manipulation, and calibration factors. During an experiment, the module's keyboard/display unit can be used as a local control centre for controlling the position of the ram, starting or stopping the experiment, or displaying current status.
4.2. Ram/Furnace Assembly
The furnace (Figures 8 and 10) contains a mica-clad Inconel heater attached to a silver block and surrounded by a channel for liquid-nitrogen cooling to sub-ambient temperatures. The computer controls the heating or cooling rate and the liquid nitrogen flow. A recess at the bottom of the furnace seats the bottom parallel-plate sensor or the ceramic single-surface sensor. A metal drip pan is used as a furnace liner to prevent contamination and assure easy cleanup. The rams are plug-in, modular devices. Two different rams are offered, one for use with the parallel-plate sensor and thin-film sensor (Figure 8), the other for the single-surface sensor (Figure 10). Both contain spring-loaded probes to make electrical contact with the sensors positioned on the surface of the furnace cavity. The parallel-plate ram also seats and provides electrical contacts for the top parallel-plate sensor. A cylindrical plunger connected to an LVDT measures sample thickness during the experiment. The ram assembly is secured to a top plate, which in turn is attached to three metal posts connected to the ram motor, located under the furnace. Ram operation is controlled by the operating software, and uses inputs from the force transducer and LVDT to monitor applied force and sample thickness. The operator can program limits based on minimum plate spacing and/or maximum force. By monitoring these variables, it is possible to obtain accurate test data on a sample even after it has undergone dramatic changes in physical form, such as melting or curing. Ram covers are provided to protect the rams from contamination by samples and to help make cleanup easy.
4.3. Electrodes/Sensors
In a dielectric analysis experiment, a sample is placed in contact with electrodes and subjected to an applied sinusoidal voltage. Sample response is measured as a function of time, temperature, and frequency. The electrode assemblies serve two purposes: transmitting the applied voltage to the sample, and sensing the response signals. The different geometries of the sensors make possible the measurement of bulk or surface properties for a wide variety of solid, paste, and liquid materials.
Figure 9.
The two electrode geometries commonly found in dielectric analysis are the parallel-plate capacitor and the interdigitated (or comb) electrode. This equipment has four types of electrodes/sensors: ceramic parallel plate, ceramic thin film, ceramic single surface and remote single surface. The interchangeable DEA sensors are the key to the DEA system; they provide precise measurement, in bench-top analysis, of bulk sample properties and sample surface properties [11]. Figure 9 shows the four types of sensors. Each of them measures in a different mode, as we detail next.
4.3.1. Parallel plate
The parallel plate sensor is used to evaluate bulk dielectric properties in a material, and to track molecular relaxations [10].
Samples to be measured with this sensor must be 2.5 mm in width and 2.5 mm in length. Maximum and minimum spacings are 0.75 mm and 0.125 mm, respectively. The types of samples that can be measured by this sensor will be seen further on. As mentioned previously, the measurements are performed in volume: the applied electric field crosses the whole sample, so the measurement is a bulk one, not a surface one. This sensor is shown in Figure 10.
Figure 10.
As can be seen, the voltage is applied at the bottom and crosses the sample; the output electric current is measured by the upper sensor, where it is converted to an output voltage and amplified. A platinum resistance temperature detector (RTD) surrounds the perimeter of the gold electrode and measures the temperature of the sample. The temperature is controlled directly by the RTD. A guard ring around the perimeter of the upper electrode corrects for electric-field fringing and for stray capacitance at the edge of the plates. Signal circuits are connected through pads on the lower sensor, which contact spring probes attached to the ram. When parallel plates are used, they are calibrated by making a capacitance measurement in a dry nitrogen atmosphere [10]. The sample is placed between the two sensor plates after making this capacitance measurement. The stepper motor then drives the sensors together to a pre-selected plate spacing or force setting. The plate spacing (sample thickness) recorded at the start of the method is used throughout the experiment in the calculation of ε´ and ε´´.
4.3.2. Ceramic Single Surface
The ceramic single surface sensor, based on a coplanar interdigitated-comb electrode design, is used for surface property evaluations and curing experiments, and is
ideal for liquid samples. The assembly is composed of a ceramic substrate, a metal ground plate, a high-temperature insulating layer, electrode arrays, a platinum resistance temperature detector (RTD), and electrical contact pads. The temperature is controlled directly by the RTD. The sensor is placed at the bottom of the oven and the sample positioned on its top surface. Ram pressure assures intimate sample/electrode contact. Spring probes attached to the ram make contact with pads on the sensor, completing the signal circuits (see Figure 11).
Figure 11.
When operating the DEA in the ceramic single surface mode, the sensor is calibrated by making a capacitance measurement in a dry nitrogen atmosphere. The sample is loaded onto the sensor after making this capacitance measurement. The stepper motor then drives the ram toward the sensor to a pre-selected thickness or force setting. ε´ and ε´´ are calculated from the current and phase data using a calibration table stored in the instrument memory.
4.3.3. Sputter-coated sensor
Sputter-coated measurements are used to evaluate bulk dielectric properties in a thin-film material. A metallic electrode is sputter-coated, under vacuum, directly onto the sample surface to improve sample/measurement electrode contact. The lower electrode, positioned on the surface of the furnace, is a contact pad that sets up the electrical field and makes contact with the electrode surface sputtered onto the sample. A platinum resistance temperature detector (RTD) surrounds the perimeter of the gold electrode and measures the temperature of the sample. The temperature is controlled directly by the RTD.
Figure 12.
The upper electrode, attached to the face of the ram, also acts as a contact pad to make contact with the electrode surface sputtered on the sample. It measures the generated current, which is then converted to an output voltage and amplified. Signal circuits are connected through the pads on the lower sensor, which contact spring probes attached to the ram. The plate spacing (sample thickness) is measured when the ram closes. This can be changed before the experiment is started, and is used throughout the experiment in the calculation of ε´ and ε´´.
4.3.4. Remote Single Surface Sensor
The remote single surface sensor is used for surface property evaluations and curing experiments. In addition, because of its flexible design and ribbon-cable leads, it can be embedded in a sample of any size for product development. Applications include monitoring the dielectric properties of a polymer during moulding, or while exposed to adverse environments such as solvents or ultraviolet light. It is also possible to embed the sensor in full-sized prototype products during development for a long-term test of end-use performance, or of stability and heat history during storage. In this mode, the sample is returned periodically to the instrument for evaluation. The coplanar interdigitated-comb design of the electrodes is similar to that of the ceramic single surface sensor, but the sensing area is considerably smaller. It uses coplanar, interdigitated-comb electrodes with the electrode array vapour-deposited on a silicon substrate, supported by a carrier of polyimide film and connected to conductors in the ribbon cable. The connector end of the ribbon cable is plugged into an interface box, which is connected to the front of the instrument. The flexibility of the cable and the small sensor size, together with the use of a signal amplifier in the integrated circuit adjacent to the sensor array, allow the sensor to monitor a sample up to 10 feet away from the instrument. Sample temperature is measured by a thermal diode adjacent to the sensing array. ε´ and ε´´ are calculated from the current and phase data using a calibration table stored in the instrument memory [11].
Dielectric measurements are very sensitive to moisture, so all the sensors have to be kept in a desiccator.
Figure 13.
Looking at the components of the measuring system and the functions they perform, we can get an idea of the parameters to control and modify. These parameters are:
Type of sensor: to determine which sensor to use for a particular experiment, two factors need to be considered: the sample to be analysed and the experimental conditions. Table 2 records broadly the different types of measurements and the sensor to be used in each case.
Table 2
Sensor                   Type of measurement
Parallel plate           Used to evaluate volumetric (or bulk) dielectric properties.
Sputter coated           Used to evaluate dielectric properties in thin films of material.
Ceramic single surface   Used to evaluate surface properties and cure experiments; also used for liquid samples.
Remote single surface    A flexible integrated-circuit sensor to be used during the cure of a material.

The type of material must also be taken into account for a correct choice of the sensor to be used. Table 3 provides some examples.

Table 3 [11]
Sample                                 Experimental conditions                                 Sensor
Thermoset, liquid or paste             Analysis during cure in a controlled thermal history    Ceramic single surface
                                       Analysis in prototype mold or external oven             Remote single surface
Cured film                             Post-cure analysis                                      Parallel plate
Thermoplastic film                     Temperature/frequency analysis                          Parallel plate
Thin film                              -                                                       Sputter coated
Liquid paint                           Analysis during drying or curing in a controlled
                                       thermal history                                         Ceramic single surface
                                       Stability during storage and shipment                   Remote single surface
Organic liquid, low molecular weight   Ambient temperature measurements                        Ceramic single surface or parallel plate
Low molecular weight (oil)             Temperature/frequency transition analysis               Ceramic single surface
Sheet molding compound                 Maturation/thickening analysis                          Remote single surface
                                       Cure analysis in a controlled thermal history           Ceramic single surface
                                       Cure analysis in prototype development mold             Remote single surface
Elastomer, cured film                  Temperature/frequency transition analysis               Parallel plate
Elastomer, unvulcanized                Analysis during cure                                    Parallel plate or ceramic single surface
The maximum force must also be selected; applying an adequate force ensures good contact between sensor and sample. The force range in the DEA 2970 goes from 0 to 500 N. It is recommended to apply 300 N for rigid and semi-rigid films, and 500 N for powders, although the force to be applied also depends on the type of sensor used. Possible changes of physical state during the measurement must be considered as well (for example, passing through the melting point, curing of the sample, or powdered ionic samples), because in these cases the applied force should be lower. Another parameter to account for is the minimum spacing (in mm) between the upper and lower sensors. This limit is imposed to prevent liquid or soft samples from leaking out of the sensor area during the experiment. These two parameters can stop ram motion during the experiment: the ram stops when the gap between the ram and the lower sensor reaches the specified minimum spacing, or when the measured force reaches the maximum force selected. In other words, during the measurement the gap must remain greater than the specified minimum spacing and the applied force lower than the maximum force selected. For example, to analyze rigid or semi-rigid samples at room temperature, a minimum spacing 100 times greater than the sample thickness measured at room temperature is recommended. To analyze soft or elastic samples at room temperature, the minimum spacing should be 90 % of the sample thickness at room temperature. The spacing for a soft sample or paste must be verified after the material solidifies at lower temperatures, to compensate for thermal contraction of the sample. To study epoxy resins and liquid samples using the single surface sensor, the minimum spacing should be 2.5 mm.
To design an experiment it is advisable to control some other parameters as well, such as the purge gas, the purge flow, the temperature range, the frequencies, and the time length of the experiment.

5. How to run a DEA experiment?

In the first place, we have to check the correct performance of the measurement equipment. To keep the dielectric analyzer working at the highest possible level of performance, it is important to calibrate it properly. Electronic calibration is done to calibrate the DEA analog board. This type of calibration must be checked periodically, mainly when laboratory conditions change substantially (temperature, humidity, etc.). In the second place, the type of sample to be analyzed should be considered together with the experimental conditions. By doing so, we can select the type of sensor, the minimum and maximum spacing, the force to be applied, the possible use of liquid nitrogen (if we work at sub-ambient temperatures), the purge gas, and the possible use of more than one sensor. This group constitutes the so-called experimental parameters, that is, the parameters that the equipment needs for a correct performance. The next step is the design of the operating method or experiment. To operate the DEA 2970 in experiments where the temperature changes at a constant rate, the frequencies must be chosen in advance. In this particular measuring system, 28 different values can be chosen, so estimating the scanning time in advance is very useful for the choice of the heating rate.
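As a rough illustration of why the scan time matters when choosing the heating rate, the sketch below estimates the temperature drift during one multiplexed pass through a frequency table. The per-frequency acquisition-time model (a fixed overhead plus a few signal cycles) is an assumption made up for illustration, not a DEA 2970 specification.

```python
# Estimate how much the temperature drifts while the analyzer cycles once
# through the selected frequency table during a constant-rate ramp.
# The timing model below is an illustrative assumption, not an instrument spec.

def sweep_time_s(frequencies_hz, cycles_per_point=5, overhead_s=0.5):
    """Time for one pass through the frequency table: a fixed overhead per
    frequency plus a few full cycles of the excitation signal."""
    return sum(overhead_s + cycles_per_point / f for f in frequencies_hz)

def temperature_drift_c(frequencies_hz, heating_rate_c_per_min):
    """Temperature change (in degrees C) during one full frequency sweep."""
    return heating_rate_c_per_min / 60.0 * sweep_time_s(frequencies_hz)

freqs = [0.1, 1.0, 10.0, 100.0, 1000.0]   # Hz, example table
print(temperature_drift_c(freqs, heating_rate_c_per_min=3.0))
```

Under these assumptions the low frequencies dominate the sweep time, which is why adding sub-hertz points forces a slower heating rate if each sweep is to stay quasi-isothermal.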
Once the frequency table has been selected, the operating method can be designed from a large number of options that depend on the results we are looking for. At this point, the type of experiment to be run must be very clear. Experiments can be sorted into:
- curing experiments, and
- solid or cured sample experiments.
In each case, there are two kinds of experiments:
- isothermal curing experiments, i.e. keeping the sample at a constant temperature for a given time and recording as a function of time;
- dynamic curing, where the sample is subjected to a constant heating (or cooling) rate, a stepped heating rate, etc.
Isochronal experiments can also be carried out (in which the temperature is varied at constant frequencies), as well as isothermal experiments in which a frequency scan is performed at a given temperature. These are the most common types of experiments, and they supply important information about the studied system. There are many different method segments to be used, among them: Jump, Equilibrate, Initial temperature, Ramp, Isothermal, Step, Increment, Repeat, Data storage (on/off), Frequency sweep, etc. Once all these steps have been selected, the experiment can be started. The first step is the calibration of the sensor (without sample and previously subjected to a gas purge), in which temperature data and sensor geometry are recorded. Once temperature and geometry have been verified, the sample is placed and the purge, which minimizes unwanted moisture, is set up. At this moment the measuring operation can start.

References
1. A. M. Maffezzoli, L. Peterson and J. C. Seferis, Polymer Engineering and Science, 1993, Vol. 33, No. 2.
2. N. G. McCrum, B. E. Read and G. Williams, Anelastic and Dielectric Effects in Polymeric Solids, John Wiley & Sons, London, 1967.
3. A. K. Jonscher, Dielectric Relaxations in Solids, Chelsea Dielectrics Press, London, 1983.
4. E. A. Turi, Thermal Characterization of Polymeric Materials, Academic Press, San Diego, 1997.
5. S. Havriliak, Jr. and S. Havriliak, Dielectric and Mechanical Relaxation in Materials, Hanser Publishers, Munich, 1997.
6. G. Williams, Polymer, 4, 27.
7. D. J. Scheiber, J. Res. Nat. Bur. Stds., Washington, 65C, 23.
8. L. Hartshorn and W. H. Ward, J. Inst. Elec. Engrs., 79, 59, 1936.
9. J. V. L. Parry, Proc. Inst. Elec. Engrs., Pt. III, 303.
10. TA Instruments, Universal Analysis: Operator's Manual, 1999.
11. TA Instruments, Thermal Solutions: User Reference Guide, 1999.
Statistical Applications to Thermal Analysis
Ricardo Cao and Salvador Naya
Department of Mathematics, Universidade da Coruña
[email protected]

1. Application of nonparametric regression methods

1.1. Introduction
Statistical analysis is an important topic in thermal analysis. Several works in the thermal analysis literature use regression models to account for the relationship between the variables of interest in this field. Many of them are based on the Arrhenius equation as modified by Sestak and Berggren [27]; they were discussed by Vyazovkin [29] and compared by many authors [3]. The response variable is often the heat flow or the sample mass along the experiment, while typical explanatory variables are temperature or time. Some important properties of the materials can be directly measured or easily calculated from the response variables. They include characteristic temperatures of different processes, e.g. melting and glass transition temperatures, thermal stability, specific heat at different temperatures, and the enthalpy associated with chemical reactions and physical changes. In addition, kinetic analysis of the processes can be performed from the thermal analysis data. The study of these data gives useful insight for materials characterization.

It is relevant to point out that the estimation of the first two derivatives is also an important issue. In the case of weight loss processes, the TGA first derivative (DTG) can be compared to the DSC trace. The DTG trace is sharper and more accurate for detecting the onset and end points of the processes, which is especially interesting when studying overlapped processes. This higher quality of DTG compared to DSC comes from two differences between the two techniques:
1. The TGA response is almost instantaneous and immediately reflects the weight changes, while the DSC signal is affected by a thermal lag (the heat from the sample takes some time to travel through the crucible to the detector).
2. The heat diffusion in the crucible smoothes the signal before it reaches the detector.
TGA and DSC, therefore, give direct mass and calorimetric measurements for which good estimation accuracy is desired.
The main aim of this work is to accurately estimate the functional relationship between an explanatory variable X, typically time or temperature, and a response variable Y, often weight (for TGA curves) or heat flow (for DSC curves). The following nonparametric regression model is assumed to hold:

$$Y_i = m(X_i) + \varepsilon_i, \quad i = 1, 2, \ldots, n, \quad \text{with } E(\varepsilon_i) = 0, \tag{1}$$

where $m$ is the regression function of Y given X and $\varepsilon_i$ is a term accounting for the measurement error (for instance, that of the calorimeter). Throughout the paper it will be assumed that the design is fixed (in practice the $X_i$ are most of the time equally spaced) and that the error is homoscedastic, i.e. $\mathrm{Var}(\varepsilon_i) = \sigma^2$ for $i = 1, 2, \ldots, n$.
The methods used in practice to smooth DSC or TGA curves by means of nonparametric weights do not incorporate any automatic optimal smoothing parameter selection. In some cases they are even based on moving average procedures, going back to the early work by Savitzky and Golay [25]. For this reason the poor fitting is very evident in many cases, especially in the estimation of the first and second derivatives.
Figure 1. TGA curve for the calcium oxalate sample (dashed line) and first derivative (solid line) using the RSI Orchestrator.

Figure 1 shows a fit of a TGA curve of calcium oxalate and its first derivative using one of the standard computer packages in this field, Orchestrator® by Rheometric Scientific Inc. The smoothing software incorporated in this package enables selection of the amount of smoothing "by hand", but not automatic selection of an estimated optimal smoothing parameter that takes into account the non-negligible error dependence. The aim of this paper is precisely to provide an automatic selection of the amount of smoothing to be used in these contexts.

1.2. Technical background
Nonparametric regression methods will be used to estimate the function m without specifying any a priori parametric model. The key idea is to assume that m is a smooth function and to approximate m(x) by averaging the Y-observations in a neighbourhood of x:

$$\hat m(x) = \frac{1}{n} \sum_{i=1}^{n} W_{ni}(x)\, Y_i, \quad \text{with } \frac{1}{n} \sum_{i=1}^{n} W_{ni}(x) = 1, \tag{2}$$
where $W_{ni}(x)$ is the weight that the i-th observation gives to the point x. Typically these weights depend on some smoothing parameter h and some kernel function K. The choice of the smoothing parameter is crucial, since it controls the amount of smoothing used in the estimation. Among the many nonparametric estimators for m we mention the Nadaraya-Watson estimator (see [18]), the Priestley-Chao estimator (see [21]), the Gasser-Müller estimator (see [16]) and the local polynomial estimator (see [9]). Since our aim is to estimate the regression function as well as its first two derivatives, it is very natural to use local polynomial estimators, which additionally have good properties for estimating at the boundary.

1.3. Local polynomial estimator
The local polynomial estimator was introduced by Stone [28] and Cleveland [5], but it was not extensively used until the nineties, after publication of the papers by Ruppert and Wand [23] and Fan and Gijbels [9]. The idea behind the local polynomial regression estimator is to use weighted least squares to perform a local fit to a polynomial of a degree specified in advance. More precisely, the regression function (j=0) and its derivatives (j=1,2,…,p) at a given point x are estimated by

$$\hat m^{(j)}(x) = j!\,\hat\beta_j(x), \quad j = 0, 1, 2, \ldots, p, \tag{3}$$

where

$$\hat\beta = (\hat\beta_0, \hat\beta_1, \ldots, \hat\beta_p)^t = \arg\min_{\beta} (Y - X\beta)^t W (Y - X\beta), \tag{4}$$

$$X = \begin{pmatrix} 1 & (X_1 - x) & \cdots & (X_1 - x)^p \\ \vdots & \vdots & & \vdots \\ 1 & (X_n - x) & \cdots & (X_n - x)^p \end{pmatrix}, \qquad Y = \begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix},$$

and $W = \mathrm{diag}\{K_h(X_i - x)\}$ is the n×n matrix that contains the weights that every datum in the sample gives to the point of interest. An explicit expression for the vector $\hat\beta$ is:

$$\hat\beta = (X^t W X)^{-1} X^t W Y. \tag{5}$$

1.3.1. Practical choice of the kernel and the order of the local polynomial
In order to use the local polynomial estimator we need to choose the kernel function K, the degree of the polynomial p, and the bandwidth h. The choice of K and p is of secondary importance with respect to the smoothing parameter h. Fan and Gijbels [10] recommend using the Epanechnikov kernel, since it minimizes the asymptotic mean squared error for an optimal bandwidth. They also suggest choosing p as any integer larger than the order of the derivative of interest, j, such that p-j is odd. For instance, we could take p=1 for estimating the regression function itself, while p=3 could be used for estimating the second derivative of the regression function. In general, since the bias decreases and the variance increases with p, an advisable practical choice is to set p-j=1 or 3. One drawback of the local polynomial estimator is that its conditional variance tends to infinity when the neighbourhood of the point of interest (using a compact support kernel) contains no more than p+3 data points. This problem, pointed out by Seifert and Gasser [26], can be solved by locally increasing the smoothing parameter whenever it occurs.

For dependent data, such as those we are dealing with, the classical local polynomial regression estimator can still be used, although its asymptotic mean squared error now depends on the sum of covariances of the error process. An alternative approach has been proposed by Francisco and Vilar-Fernández [13], using generalized least squares ideas to account for the dependence structure.

1.3.2. Bandwidth selection criteria
Typical bandwidth selection procedures are based on minimizing the empirical version of some criterion that accounts for the error between the nonparametric ν-th derivative regression estimate and its underlying counterpart. For instance, using the mean squared error at a given point x,

$$\mathrm{MSE}_x(h) = E\left[ \left( \hat m_h(x) - m(x) \right)^2 \right], \tag{6}$$

we obtain local optimal bandwidths. Global criteria, such as the MISE, can be obtained by considering global distances between the estimator and the true curve. Most of the time these measures can be written as integrated versions of some local criterion (Eq. 6). For instance, the mean integrated squared error can be written as

$$\mathrm{MISE}(h) = E \int \left( \hat m_h(x) - m(x) \right)^2 w(x)\, dx, \tag{7}$$

for some weight function w. This measure can be easily decomposed as the sum of the integrated variance and the integrated squared bias. Under independence in the error structure, and assuming that ν+p is odd, Fan and Gijbels [10] give asymptotic expressions for the bias and the variance of the local polynomial estimator:
$$\mathrm{Bias}\left( \hat m^{(\nu)}(x) \right) = h_n^{p+1-\nu}\, \frac{m^{(p+1)}(x)\, \nu!}{(p+1)!}\, B_\nu\, (1 + o(1)), \quad \nu = 0, 1, \ldots, p,$$

$$\mathrm{Var}\left( \hat m^{(\nu)}(x) \right) = \frac{1}{n h_n^{2\nu+1}}\, \frac{c(\varepsilon)}{f(x)}\, (\nu!)^2\, V_\nu\, (1 + o(1)), \quad \nu = 0, 1, \ldots, p, \tag{8}$$
where, in (8), f is the design density and the values $B_\nu$ and $V_\nu$ depend on the kernel function (see Ruppert, Sheather and Wand [24] for details). The smoothing parameter minimizing the asymptotic expression of the MISE can easily be found to be

$$h_{\mathrm{AMISE}} = C_{\nu,p}(K) \left( \frac{\sigma^2}{n \int \left( m^{(p+1)}(x) \right)^2 w(x) f(x)\, dx} \right)^{\frac{1}{2p+3}}, \tag{9}$$

where the constants $C_{\nu,p}(K)$ can be computed as follows:

$$C_{\nu,p}(K) = \left[ \frac{(p+1)!^2\, (2\nu+1) \int K_\nu^{*2}(t)\, dt}{2(p+1-\nu) \left( \int t^{p+1} K_\nu^*(t)\, dt \right)^2} \right]^{\frac{1}{2p+3}},$$

with $K_\nu^*(t) = \left( \sum_{l=0}^{p} S^{\nu l} t^l \right) K(t)$, where the $S^{\nu l}$ are the elements of the matrix $S^{-1}$, $S = (\mu_{j+l})_{j,l=0}^{p}$ and $\mu_j = \int u^j K(u)\, du$. In the dependent error case similar formulas can be obtained, based on expressions parallel to the bias and variance in (Eq. 8). For the asymptotic mean integrated squared error (see Francisco and Vilar-Fernández [13] for details), the following expressions give an approximation of a reasonable criterion to select h. Therefore, the asymptotically optimal bandwidths (local, in the AMSE sense, and global, in the AMISE sense) to estimate the ν-th derivative of the regression function are:

$$h_{\mathrm{opt},L} = C_{\nu,p}(K) \left( \frac{c(\varepsilon)}{n \left( m^{(p+1)}(x) \right)^2 w(x) f(x)} \right)^{\frac{1}{2p+3}}, \qquad h_{\mathrm{opt},G} = C_{\nu,p}(K) \left( \frac{c(\varepsilon)}{n \int \left( m^{(p+1)}(x) \right)^2 w(x) f(x)\, dx} \right)^{\frac{1}{2p+3}}, \tag{10}$$

where $c(\varepsilon) = \sum_k c(k)$ and c(k) is the lag-k autocovariance of the errors $\varepsilon_i$.

The previous formulas are valid if ν+p is odd. If ν+p is even, the expressions for the optimal bandwidths are

$$h_{\mathrm{opt},L} = C_{\nu,p}(K) \left( \frac{c(\varepsilon)}{n \left( m^{(p+2)}(x) \right)^2 w(x) f(x)} \right)^{\frac{1}{2p+5}}, \qquad h_{\mathrm{opt},G} = C_{\nu,p}(K) \left( \frac{c(\varepsilon)}{n \int \left( m^{(p+2)}(x) \right)^2 w(x) f(x)\, dx} \right)^{\frac{1}{2p+5}}. \tag{11}$$

1.3.3. Two-stage plug-in bandwidth selector
Some of the most popular procedures for bandwidth selection in nonparametric curve estimation are the plug-in methods. These methods estimate the minimizer of either the AMSE or the AMISE. For the local polynomial estimator under dependence, the plug-in local and global bandwidth selectors are estimators of expressions (Eq. 10 and 11). Therefore, estimators of $c(\varepsilon)$, the sum of autocovariances, and of the (p+1)-th derivative of the regression function are needed.

Estimating the autocovariances sum

Although $c(\varepsilon)$ can be estimated through the spectral density of the $\varepsilon_i$, our approach will be somewhat simpler. Let us assume that the $\varepsilon_i$ follow an autoregressive process of order 1 (AR(1)) with first-order autocorrelation ρ. Then $c(\varepsilon)$ can be written in terms of the error variance and the autocorrelation coefficient:

$$c(\varepsilon) = \sum_{k=-\infty}^{\infty} c(k) = \sum_{k=-\infty}^{\infty} \sigma^2 \rho(k) = \sigma^2\, \frac{1+\rho}{1-\rho}. \tag{12}$$

We now compute the residuals $\hat\varepsilon_i = Y_i - \hat m_h(X_i)$, using some preliminary smoothing parameter h, and then find an estimator of the variance, $\hat\sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (\hat\varepsilon_i - \bar\varepsilon)^2$ with $\bar\varepsilon = \frac{1}{n} \sum_{i=1}^{n} \hat\varepsilon_i$, and an estimator of the first-lag autocorrelation coefficient:

$$\hat\rho = \frac{\sum_{i=1}^{n-1} (\hat\varepsilon_i - \bar\varepsilon)(\hat\varepsilon_{i+1} - \bar\varepsilon)}{\sum_{i=1}^{n} (\hat\varepsilon_i - \bar\varepsilon)^2}. \tag{13}$$

Then $\hat c(\varepsilon) = \hat\sigma^2\, \dfrac{1 + \hat\rho}{1 - \hat\rho}$.
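The estimator in Eqs. 12 and 13 takes only a few lines of code. The sketch below applies it to synthetic AR(1) residuals; in practice the residuals would come from a preliminary smooth with bandwidth $h_1$, as described in the text.

```python
import random

def autocov_sum_ar1(residuals):
    """Estimate c(eps) = sigma^2 (1+rho)/(1-rho) assuming AR(1) errors,
    from residuals eps_i = Y_i - m_hat(X_i) (Eqs. 12-13 in the text)."""
    n = len(residuals)
    mean = sum(residuals) / n
    centered = [e - mean for e in residuals]
    var = sum(c * c for c in centered) / n                       # sigma^2 hat
    rho = sum(centered[i] * centered[i + 1] for i in range(n - 1)) \
          / sum(c * c for c in centered)                         # lag-1 autocorrelation
    return var * (1 + rho) / (1 - rho)

# Synthetic AR(1) residuals with rho = 0.5 and unit innovation variance:
# the true c(eps) is (1/(1-0.25)) * (1.5/0.5) = 4.
random.seed(1)
eps, prev = [], 0.0
for _ in range(20000):
    prev = 0.5 * prev + random.gauss(0.0, 1.0)
    eps.append(prev)
print(autocov_sum_ar1(eps))
```

For a long series the printed value is close to the theoretical 4, illustrating that the AR(1) shortcut avoids estimating the full spectral density.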
Pilot bandwidths choice

The plug-in method estimates the unknown quantities in (Eq. 10 and 11) and replaces the optimal bandwidths by the resulting values $h_{PI,L}$ and $h_{PI,G}$. For a fixed integer ν, the method is used to estimate $m^{(\nu)}$ using local polynomials of degree p. Estimation of $c(\varepsilon)$, already considered in the previous subsection, requires the choice of a preliminary bandwidth $h_1$, needed to compute the residuals. Plug-in bandwidth selectors also need an estimate of the (p+1)-th derivative of the regression function m. This may be done, once more, by local polynomial fitting, estimating the derivative of order p+1 using local polynomials of degree p+2. This requires the choice of a preliminary pilot bandwidth, $h_2^{(1)}$. To determine some automatic method for selecting $h_2^{(1)}$ we face similar problems when looking at the expression for the optimal (local or global) smoothing parameter in this context. More specifically, there are two unknown terms to be estimated: $c(\varepsilon)$, already considered above, and the (p+1)-th derivative of m. The idea behind the two-stage plug-in method is to propose some prepilot bandwidth, $h_2^{(0)}$, by looking at the expression for the asymptotically optimal bandwidth for this new problem:

$$h_2^{(0)} = C_2\, \delta\, n^{-\frac{1}{2p+7}}, \tag{14}$$

where δ is some estimator of the scale and $C_2$ is some constant that does not depend on the data. Since in the thermogravimetric experiments the design is equispaced, or nearly equispaced, we made the choice $\delta = (x_n - x_1)/(n-1)$. The value of $C_2$ has been adjusted by a heuristic approach to be detailed later. Parallel problems appear when selecting $h_1$. In practice we used a local linear estimator and a single-stage plug-in procedure, leading to

$$h_1 = C_1\, \delta\, n^{-\frac{1}{5}}, \tag{15}$$

for some constant $C_1$ that has been obtained by heuristic arguments. In order to obtain practical values for the constants $C_1$ and $C_2$, we used a calibration sample of a DSC curve. This sample of n=950 equally spaced data corresponds to calcium oxalate monohydrate. Using the initial bandwidth $h_1 = 6$ to compute the residuals for estimating the autocovariances sum, we selected several possible values of the prepilot bandwidth $h_2^{(0)}$, for which the final bandwidths of the two-stage plug-in procedure were computed. The results are collected in Table 1. This table shows that the sensitivity of $h_{PI}$ to the choice of the prepilot bandwidth $h_2^{(0)}$ is very low. When estimating the regression function, a factor of 10 in the prepilot bandwidth gives a factor of 4 in the pilot bandwidth and, finally, a factor of 1.5 in the plug-in bandwidth. For estimating the first and second derivatives, the plug-in bandwidth selector is rather stable with respect to the choice of the prepilot bandwidth, although not as much as for estimating m. Direct inspection of the results obtained (not reported here) shows that $h_1 = 6$ is a reasonable choice. On the other hand, the values $h_2^{(0)} = 30$ for m, $h_2^{(0)} = 28$ for m′ and $h_2^{(0)} = 30$ for m″ seem to be reasonable choices for the prepilot bandwidth $h_2^{(0)}$.
Table 1. Pilot and final bandwidths of the two-stage global plug-in procedure for estimating $m^{(\nu)}$, ν = 0, 1, 2.

| $h_2^{(0)}$ | m: $h_2^{(1)}$ (ν=2, p=3) | m: $h_{PI,G}$ (ν=0, p=1) | m′: $h_2^{(1)}$ (ν=3, p=4) | m′: $h_{PI,G}$ (ν=1, p=2) | m″: $h_2^{(1)}$ (ν=4, p=5) | m″: $h_{PI,G}$ (ν=2, p=3) |
| 10 | 6.860839 | 1.01116 | 4.56602 | 2.6356 | 4.3589 | 2.7163 |
| 20 | 9.592215 | 1.07833 | 7.61298 | 3.2715 | 9.0604 | 4.6653 |
| 30 | 11.928195 | 1.13428 | 9.67675 | 3.5414 | 11.9025 | 5.3950 |
| 40 | 14.038808 | 1.18431 | 11.5646 | 3.7621 | 13.7104 | 5.7591 |
| 50 | 16.379445 | 1.23940 | 12.8962 | 3.9113 | 16.0595 | 6.2005 |
| 100 | 26.760712 | 1.47904 | 22.5909 | 4.9377 | 28.3775 | 8.3594 |
Having these bandwidths in mind, Table 2 contains the proposed choices for the pilot bandwidth $h_1$ and the prepilot bandwidth $h_2^{(0)}$.

Table 2. Values suggested for $C_1$, $C_2$, $h_1$ and $h_2^{(0)}$ for estimating $m^{(\nu)}$.

| ν | $C_1$ | $C_2$ | $h_1$ | $h_2^{(0)}$ |
| 0 | 24 | 64 | $C_1 \delta n^{-1/5}$ | $C_2 \delta n^{-1/9}$ |
| 1 | 24 | 52 | $C_1 \delta n^{-1/5}$ | $C_2 \delta n^{-1/11}$ |
| 2 | 24 | 51 | $C_1 \delta n^{-1/5}$ | $C_2 \delta n^{-1/13}$ |
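The recipes $h_1 = C_1 \delta n^{-1/5}$ (Eq. 15) and $h_2^{(0)} = C_2 \delta n^{-1/(2p+7)}$ (Eq. 14), with the constants of Table 2, can be evaluated directly. In the sketch below the 60-minute record length is an illustrative assumption, not the calibration sample of Table 1, and the final polynomial degree is taken as p = ν + 1 so that p − ν = 1.

```python
# Pilot and prepilot bandwidths from Eqs. 14-15 and Table 2:
#   h1    = C1 * delta * n**(-1/5)
#   h2(0) = C2 * delta * n**(-1/(2p+7)),  with p = nu + 1 (so p - nu = 1)
C1 = 24
C2 = {0: 64, 1: 52, 2: 51}            # per derivative order nu, from Table 2

def pilot_bandwidths(x_first, x_last, n, nu):
    delta = (x_last - x_first) / (n - 1)   # scale estimate for equispaced designs
    p = nu + 1                             # final polynomial degree
    h1 = C1 * delta * n ** (-1.0 / 5.0)
    h2_0 = C2[nu] * delta * n ** (-1.0 / (2 * p + 7))
    return h1, h2_0

# Example: a 60 min record sampled at n = 950 equispaced points (made-up span).
for nu in (0, 1, 2):
    print(nu, pilot_bandwidths(0.0, 60.0, 950, nu))
```

Note how the exponents 1/9, 1/11, 1/13 reproduce the last column of Table 2 for ν = 0, 1, 2.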
1.3.4. Computational issues

One of the problems that may appear in practice when using the local polynomial estimator is that the matrix $X^t W X$ is singular or close to singular. This occurs very often when the kernel has compact support, the design is equispaced and the bandwidth is small. Let us consider a fixed point $x_0$ where the estimation will be performed using a bandwidth h. If the support of the kernel is [-1, 1], then only the $x_i$'s falling in the interval $[x_0 - h, x_0 + h]$ will be used to obtain the value of the estimator at $x_0$. Seifert and Gasser [26] have studied the case $\det(X^t W X) = 0$, as well as conditions for finite variance of the local polynomial estimator, reaching the following conclusions.
1. Estimation at the point $x_0$ requires at least p+1 points in the interval $[x_0 - h, x_0 + h]$. This condition becomes more and more restrictive as p increases.
2. In order to guarantee that the variance of the estimator is finite, at least p+3 data points should fall within the interval $[x_0 - h, x_0 + h]$.
For both reasons, whenever the final two-stage plug-in bandwidth, or any auxiliary bandwidth, is not large enough for the interval $[x_0 - h, x_0 + h]$ to contain p+3 points, the bandwidth is increased up to a value that meets this condition.

Along this unit, both the plug-in local and global bandwidth selectors have been considered. However, the global bandwidth may sometimes suffer from numerical problems, such as those mentioned above, at the boundary of the support. In such cases, the global bandwidth is turned into a local one at the boundary. In principle, the local plug-in bandwidth seems to be a more accurate smoothing parameter for estimating the regression derivatives in a grid of points, but the algorithm becomes computationally much more time consuming. It is evident that using a single smoothing parameter instead of a different one for every point in a grid makes a difference in terms of computations. However, there are some aspects that make the global bandwidth algorithm even more efficient for an equispaced design. In that case, the matrix $(X^t W X)^{-1}$ does not change when the estimator is evaluated at any design point x such that $x_1 < x - h$ and $x + h < x_n$. The reason is that the distances between x and the $x_i$'s falling within the interval $[x - h, x + h]$ do not change when $x \in (x_1 + h, x_n - h)$. This means that the matrix $(X^t W X)^{-1}$ used to compute the local polynomial estimator at $x_i$ does not change for i = ℓ, ℓ+1, …, u-1, u, where
$\ell = [h/\delta] + 2$ and $u = n - [h/\delta] - 1$. For those i outside this range, the matrix $X^t W X$ has to be explicitly computed at every different point.

In order to save calculations when computing the estimator at the point $x = x_p$, let us write the (i,j)-th element of the matrix $X^t W X$:

$$(X^t W X)_{i,j} = \sum_{r=1}^{n} (x_r - x_p)^i\, W_r^{(p)}\, (x_r - x_p)^j = \sum_{r=1}^{n} (x_r - x_p)^{i+j}\, W_r^{(p)}, \tag{16}$$

where $W_r^{(p)} = K_h(x_r - x_p)$ is the r-th diagonal element of W. Using the fact that $K(u) = 0$ for all $u \notin [-1, 1]$, we have that

$$\frac{|x_r - x_p|}{h} > 1 \;\Rightarrow\; W_r^{(p)} = 0,$$

or equivalently:

$$W_r^{(p)} \neq 0 \;\Leftrightarrow\; x_r - x_p \in [-h, h] \;\Leftrightarrow\; r \in \left[ p - \left[\tfrac{h}{\delta}\right],\; p + \left[\tfrac{h}{\delta}\right] \right].$$

By defining the indices $s_1 = p - [h/\delta]$ and $s_2 = p + [h/\delta]$, we find a faster-to-evaluate expression for the (i,j)-th element of the matrix $X^t W X$:

$$\sum_{r=s_1}^{s_2} (x_r - x_p)^{i+j}\, W_r^{(p)} = \delta^{i+j} \sum_{k=-[h/\delta]}^{[h/\delta]} k^{i+j}\, K\!\left( \frac{k\delta}{h} \right).$$
It is clear that this implementation reduces the number of calculations for computing the estimator at a given point from O(n) to O(h/δ). This reduction is especially important for moderate bandwidths. In practice, for many of the thermogravimetric data sets we used, the computer time could be reduced by a factor of 10 to 20.

1.4. Conclusions
In this section we include the results obtained using the local polynomial estimator with the two-stage plug-in bandwidth, with the covariances sum estimated, for a sample of calcium oxalate. For comparison purposes we show the results obtained for the same sample using one of the smoothing routines incorporated in one of the standard software packages in calorimetry, the RSI Orchestrator. This adaptive smoothing method was designed for TGA experiments at constant heating rate, but should perform equally well on curves from other thermal analysis experiments where the explanatory variable is time (for constant heating rate or isothermal experiments) or temperature (in the case of constant heating rate).
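The estimator itself is easy to reproduce. Below is a minimal sketch of the local polynomial fit of Section 1.3 (local linear by default, Epanechnikov kernel, fixed bandwidth) applied to synthetic data; the two-stage plug-in bandwidth selection is left out, and all names are illustrative.

```python
import math

def epanechnikov(u):
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def local_poly(x, y, x0, h, p=1, deriv=0):
    """Estimate m^(deriv)(x0) by a weighted least-squares fit of a degree-p
    polynomial in (x_i - x0), weights K((x_i - x0)/h) (Eqs. 3-5)."""
    pts = [(xi - x0, yi, epanechnikov((xi - x0) / h))
           for xi, yi in zip(x, y) if abs(xi - x0) <= h]
    if len(pts) < p + 3:
        raise ValueError("increase h: fewer than p+3 points in the window")
    # Normal equations (X^t W X) beta = X^t W Y, solved by Gaussian elimination.
    A = [[sum(w * d ** (i + j) for d, _, w in pts) for j in range(p + 1)]
         for i in range(p + 1)]
    b = [sum(w * d ** i * yi for d, yi, w in pts) for i in range(p + 1)]
    for i in range(p + 1):                     # naive elimination, no pivoting
        piv = A[i][i]
        for k in range(i + 1, p + 1):
            f = A[k][i] / piv
            A[k] = [akj - f * aij for akj, aij in zip(A[k], A[i])]
            b[k] -= f * b[i]
    beta = [0.0] * (p + 1)
    for i in range(p, -1, -1):                 # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, p + 1))) / A[i][i]
    return math.factorial(deriv) * beta[deriv]  # m^(deriv)(x0) = deriv! beta_deriv

# Noise-free quadratic as a sanity check: a local linear fit at an interior
# design point recovers both the function value and its first derivative.
xs = [i * 0.01 for i in range(201)]
ys = [2.0 + 3.0 * xi - xi ** 2 for xi in xs]
print(local_poly(xs, ys, 1.0, h=0.1))
```

Raising the bandwidth whenever the window holds fewer than p+3 points mirrors the Seifert-Gasser safeguard discussed in Section 1.3.4.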
Figure 2. The automatic smoothing obtained using the two-stage global plug-in bandwidth.

2. Kinetic study using the logistic model regression

2.1. Introduction
TG is widely used to determine kinetic parameters for polymer decomposition. Both isothermal and dynamic heating experiments can be used to evaluate kinetic parameters, and each has advantages and disadvantages. In dynamic thermogravimetric analysis (TGA), the mass of the sample is continuously monitored while the sample is subjected, in a controlled atmosphere, to a thermal program where the temperature is ramped at a constant heating rate. Ideally, a single thermogram has been said to be equivalent to a very large family of comparable isothermal volatilization curves and, as such, constitutes a rich source of kinetic data for volatilization [2]. The classical way to study the kinetics of these processes by TGA starts from the assumption that the weight loss follows the Arrhenius equation:

$$k(T) = A \cdot \exp\left( -\frac{E}{RT} \right), \tag{17}$$

where k, the reaction rate, depends on the temperature T. The activation energy E may be considered constant in each degradation process (which appears as a clear step in the mass trace), since the degradation mechanism is supposed not to change in a narrow range of temperatures. A is another constant that, in the case that the kinetics follow a reaction-order model, may be calculated from $A = m_t^n$, where n is the reaction order.

2.2. Other models
Many other models start from the Arrhenius equation, as modified by Sesták and Berggren [3]:

$$\frac{d\alpha}{dt} = k\, \alpha^m (1 - \alpha)^n \left[ -\ln(1 - \alpha) \right]^p,$$

where n, m and p are constants. Two of the most used derivative models based on that equation are those of Freeman and Carroll [14] and Friedman [15].
There are also some integral models, like those of Ozawa [19], Flynn [8] and the one proposed by Popescu [20], which allow calculation of n and A from TGA data obtained at several heating rates. The method proposed by Conesa [6] considers that some organic fractions of the sample decompose independently, giving an organic residue and an inorganic fraction. This model gave good correlation with the weight-loss derivative data for different rubbers [10]. The method proposed by Carrasco and Costa [3] has been successfully applied to the thermal degradation of polystyrene. Although the application of these models to specific cases has been checked by detailed statistical studies, all of them are based on the Arrhenius equation and cannot be generally applied to material degradations following very different kinetics; moreover, their methodology is sometimes cumbersome. It has been said of methods based on a single heating rate that quite different reaction models fit the data equally well (from the statistical point of view), whereas the numerical values of the corresponding Arrhenius parameters differ crucially (Vyazovkin [29]); their physical meaning is obscure and no predictions can be made outside the range of experimental temperatures (Vyazovkin). Other authors deemed the Arrhenius model inadequate for the calculation of kinetic parameters from non-isothermal thermogravimetric curves [13]. Moreover, arising from the Kinetics Workshop held during the 11th International Congress on Thermal Analysis and Calorimetry (ICTAC) in Philadelphia, USA, in 1996, sets of kinetic data were prepared and distributed to volunteer participants for their analysis using any method, or several methods, they wished. The results obtained by each researcher were different from those obtained by the others (Brown, M. et al. [2]). All of this confirms our belief that the existing models cannot be generally applied and that it is sometimes not clear which one is best suited to each case.

That is the reason to propose the alternative model described in the following sections.

2.3. Proposed logistic model
This model decomposes the TGA trace into several logistic functions, assuming that each function represents the degradation kinetics of one component of the sample. Even in the case of homogeneous materials, it is supposed that several different structures may exist, each one following its own specific kinetics, which may be different from the others. In this model, it is assumed that a TGA trace may be fitted by a combination of logistic functions:

$$Y(t) = \sum_{i=1}^{k} w_i\, f(a_i + b_i t), \qquad f(t) = \frac{e^t}{1 + e^t}, \tag{17}$$
where i = 1, 2, …, k represent different components from the weight loss point of view, not necessarily different chemical compounds. In order to model the weight loss along time, the candidate functions (t, Y_i(t)) for estimating the weight along time have to verify that the response Y_i(t) tends to 0 when t → ∞. This implies that the b_i parameters have to be negative. When t = 0, the Y(t) function has to tend to the mass of the original sample, and the Y_i(t) functions have to tend to the mass of each component in the original sample; that is, the w_i constants correspond to the weight loss of the sample in each weight loss process. These weight loss processes generally appear as clear steps of the TGA trace. The function Y(t) that represents the overall TGA trace may be expressed as a sum of functions of the form

Y_i(t) = w_i f(a_i + b_i t)

The constants a_i and b_i are calculated taking into account that the b_i values represent the slope of the weight steps, while the change of scale comes from the ratios a_i / b_i. The w_i values are the weights of each component in the sample.
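As a minimal sketch of the mixture above (in Python rather than the S-Plus used by the authors), the four-component example of Figure 3 can be evaluated directly; the trace equals the total mass at t = 0 and decays to zero for large t:

```python
import math

def f(t):
    # logistic function f(t) = e^t / (1 + e^t), written in an overflow-safe form
    return 1.0 / (1.0 + math.exp(-t)) if t >= 0 else math.exp(t) / (1.0 + math.exp(t))

def tga_trace(t, components):
    # Y(t) = sum_i w_i * f(a_i + b_i * t); components is a list of (w_i, a_i, b_i), b_i < 0
    return sum(w * f(a + b * t) for w, a, b in components)

# the four-component example of Figure 3: g(t) = 5f(12-4t) + 4f(14-2t) + 7f(43-5t) + f(16-t)
g = [(5, 12, -4), (4, 14, -2), (7, 43, -5), (1, 16, -1)]

print(round(tga_trace(0, g), 3))   # 17.0: at t = 0 the trace equals the total mass, sum of w_i
print(round(tga_trace(40, g), 3))  # 0.0: all components fully decomposed
```

Each tuple plays the role of one weight loss step; the negative b_i values enforce the required asymptotic behaviour.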
Figure 3. Function obtained from the sum of 4 simple logistic functions: g(t) = 5f(12 − 4t) + 4f(14 − 2t) + 7f(43 − 5t) + f(16 − t).

2.3.1. Kinetic study using the logistic model
Once the regression function of the TGA trace has been obtained, it is immediate to obtain derivatives. Thus, for example, the first derivative of the TGA trace (DTG), which is used by many kinetic models since it represents the weight loss rate along time, may be expressed by the following equation:

DTG(t) = \sum_{i=1}^{k} w_i b_i f'(a_i + b_i t), \qquad f'(t) = \frac{e^t}{(1 + e^t)^2}    (18)
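Following the same sketch, the DTG curve of equation (18) only needs the logistic derivative, which satisfies the identity f'(t) = f(t)(1 − f(t)):

```python
import math

def f(t):
    # overflow-safe logistic function
    return 1.0 / (1.0 + math.exp(-t)) if t >= 0 else math.exp(t) / (1.0 + math.exp(t))

def f_prime(t):
    # f'(t) = e^t / (1 + e^t)^2, identical to f(t) * (1 - f(t))
    return f(t) * (1.0 - f(t))

def dtg(t, components):
    # equation (18): DTG(t) = sum_i w_i * b_i * f'(a_i + b_i * t)
    return sum(w * b * f_prime(a + b * t) for w, a, b in components)

# one component (w, a, b) = (5, 12, -4): the loss rate peaks at t = -a/b = 3,
# where f'(0) = 1/4, so DTG(3) = 5 * (-4) * (1/4) = -5
print(dtg(3.0, [(5, 12, -4)]))  # -5.0
```

The peak of each component's loss rate falls at t = −a_i/b_i, the midpoint of the corresponding weight step.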
Analyzing, for example, its first component (i = 1):

f(a_1 + b_1 t) = \frac{e^{12 - 4t}}{1 + e^{12 - 4t}}

Its first derivative is:

f'(a_1 + b_1 t) = \frac{-4 e^{12 - 4t}}{(1 + e^{12 - 4t})^2}

Its second derivative is:

f''(a_1 + b_1 t) = \frac{-16 e^{12 - 4t} (e^{12 - 4t} - 1)}{(1 + e^{12 - 4t})^3}
Figure 4. Plots of the f function and its first and second derivatives.

Another possible interpretation of the logistic parameters is obtained by applying a change of scale and position. This improves equation (18), since the new values show the weight loss rate b'_i and the exact position on the time axis of the point corresponding to half the weight loss of each step, a'_i:

Y_i(t) = w_i f\left(\frac{t - a'_i}{b'_i}\right)    (19)

In any case, this is simply a linear transformation, so either set of parameters may be obtained from the other.
Figure 5. The overall function obtained from the sum of the four functions previously described (dashed curve), and its first (A) and second (B) derivatives.
2.3.2. Logistic parametric fitting
Fitting the data to a logistic function requires calculating values for the equation parameters. This task is usually performed using statistical software; in this case, the nonlinear regression and derivatives packages of the S-Plus software were used. The nonlinear regression model is:

y_i = m(x_i, \theta) + \varepsilon_i, \quad i = 1, 2, \ldots, n
where the response variable and the independent variable values are represented by y_i and x_i, respectively; θ is the parameter vector, estimated by least squares; and the ε_i are the errors, normally distributed with mean zero and constant variance. The residuals of the model are defined as:

e_i(\theta) = y_i - m(x_i; \theta), \quad i = 1, 2, \ldots, n

The parameters of the model were estimated by the nonlinear least squares method, whose fundamentals were described by Gay, D. M. [14]. The Levenberg-Marquardt routine, based on the "trust region" algorithm, was used to generate the sequence of approximations to the minimum point, i.e. the parameter values that minimize that sum of squares. This algorithm was discussed by Chambers, J. M., and Hastie, T. J. [4], and its computer implementation was described by Dennis, J. E. et al. [7]. One of the problems that appears when fitting is choosing starting points for the parameters to be estimated. One possibility consists in trying to estimate the inflection points by observation of the TGA trace. Since this method is not easy and requires previous expertise, we propose a method based on the idea of supposing that the data follow a logistic regression (Equation 3). It is then possible to fit the logit of the Y(t)/w function to a straight line whose intercept is a_i and whose slope is b_i. The reason for this linear fitting is the following:

Y(t) = w f(a + bt) = w \frac{\exp(a + bt)}{1 + \exp(a + bt)} \;\Rightarrow\; \frac{Y(t)/w}{1 - Y(t)/w} = \frac{Y(t)}{w - Y(t)} = \exp(a + bt)

So:

\mathrm{logit}\left(\frac{Y(t)}{w}\right) = \log\left(\frac{Y(t)/w}{1 - Y(t)/w}\right) = a + bt
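The logit linearization reduces the starting-value problem to ordinary linear least squares. A small self-contained sketch in Python (the single-step trace is synthetic and the step height w is assumed known here, as in the text, where it is read off the TGA step):

```python
import math

def f(t):
    # overflow-safe logistic function
    return 1.0 / (1.0 + math.exp(-t)) if t >= 0 else math.exp(t) / (1.0 + math.exp(t))

# synthetic single-step trace Y(t) = w * f(a + b*t) with known parameters
w, a, b = 12.93, 17.48, -0.024
ts = list(range(0, 1400, 10))
ys = [w * f(a + b * t) for t in ts]

# logit(Y/w) = a + b*t: keep only points where the logit is finite
pts = [(t, math.log((y / w) / (1.0 - y / w)))
       for t, y in zip(ts, ys) if 1e-10 < y / w < 1.0 - 1e-10]

# ordinary least squares for the straight line logit = a + b*t
n = len(pts)
sx = sum(t for t, _ in pts)
sy = sum(v for _, v in pts)
sxx = sum(t * t for t, _ in pts)
sxy = sum(t * v for t, v in pts)
b_hat = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a_hat = (sy - b_hat * sx) / n
print(round(a_hat, 3), round(b_hat, 4))  # 17.48 -0.024
```

On noiseless data the original parameters are recovered exactly (up to floating-point error); on real traces these estimates serve only as starting values for the nonlinear least squares fit.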
2.3.3. Application of the logistic regression to different cases
In order to validate the model in extreme situations, some TGA experiments exhibiting very different behaviours were considered. The first one corresponds to hexahydrophthalic anhydride, which underwent a typical evaporative process, consisting of a single weight loss step with maximum weight loss rate at the end of the step [18]. The second one corresponds to the analysis of wood from Eucalyptus globulus. Wood is a very complex material whose main components are cellulose and lignin. Its thermal behaviour is quite complex, and overlapping processes seem to be involved; apparently, it decomposes in four main steps. Other complex cases considered were wood from Cupressus sempervirens and plasticized poly(vinyl chloride).

The hexahydrophthalic anhydride case
In the hexahydrophthalic anhydride experiment, since there is only one weight loss step, only one logistic function is needed to model the TGA trace. In other words, the equation that describes the overall process is Y(t) = w f(a + bt).
The linear fitting of Equation (7) to the TGA data, by least squares, gives the values of the a and b parameters, resulting in the following expression describing the behaviour of hexahydrophthalic anhydride in that experiment:

Y(t) = \frac{12.93 \exp(17.48 - 0.024t)}{1 + \exp(17.48 - 0.024t)}

Figure 6. TGA trace obtained from a hexahydrophthalic anhydride experiment.
The case of Cupressus wood

Figure 7. TGA plot obtained from a Cupressus wood sample.
Linear fittings of different parts of the curve were performed in order to find the parameter values. To do this, the logit(y) function is plotted versus x and a line is fitted with the S-Plus software:

Figure 8. Plot of the logit(y) function versus time in the range from 0 to 700 s.

The fitting was performed in two ranges of data. The first one is [0, 700]. Since values close to 0 and 700 result in log 0, ten points were suppressed at each end of the range. A line was fitted between 10 and 690, resulting in w1 = 8.5, a1 = 5.12, b1 = 0.012. These values were used to initialize the model. The next range, [700, 1640], which includes a step, was treated in the same way, resulting in a2 = 9.175879 and b2 = −0.004551135, with 1631 total degrees of freedom and a residual standard error of 0.7296343. Finally, the model was fitted with these starting values:
Parameter      Value        Std. Error    t value
w1             10.53520     0.0995712     105.8060
a1              3.79104     0.1033680      36.6750
b1             -0.00765     0.0001785      42.8834
w2             89.90570     0.0350401    2565.7900
a2             12.63650     0.0331148     381.5970
b2             -0.00571     0.0000151     378.0200
Fitting for the eucalyptus experiment

In this case, four logistic components were assumed. The fitting to obtain the starting values was performed in four ranges, giving the following values for the parameters of the model:
Parameter      Value        Std. Error    t value
w1             13.04790     0.0579730     225.0690
a1              5.06769     0.0766370      66.1258
b1             -0.01132     0.0001576      71.8585
w2             41.09420     0.1888330     217.6220
a2             15.45890     0.0789307     195.8550
b2             -0.00851     0.0000429     198.0830
w3             22.53420     0.1765470     127.6390
a3            162.17000     3.3659800      48.1791
b3             -0.08569     0.0017780      48.1930
w4             23.19600     0.0503995     460.2440
a4            100.28700     1.2685400      79.0574
b4             -0.04103     0.0005184      79.1501

Figure 9. Plot of the original TGA trace compared to the estimated function.
Fitting in the case of PVC
In this case, the fitting to obtain the starting values was performed in four ranges, resulting in the following equation:

Y(t) = 2.287 f(0.631 - 0.09t) + 5.276 f(14.45 - 2.15t) + 3.061 f(22.47 - 2.09t) + 6.86 f(36.69 - 2.75t)

Figure 10. Fitting in the case of PVC.
2.3.4. Physical meaning of the parameters
Once the fittings were performed in the different cases, it is clear that the w_i values represent the magnitude of each weight loss process. The b_i parameters have the meaning of sample volatilization rate, while the a_i values represent the scale.

2.4. Conclusions
1. This method allows fitting the overall trace from a TGA experiment at once, while the classical methods can only be applied to a single step at a time.
2. Overlapping degradation processes can be explained by this method. Since the existing models were conceived to explain single processes, they generally fit overlapping processes very badly.
3. It explains the thermal degradation of each component of the sample by a single function that may be easily understood from the physical point of view.
4. This model shows the contribution of each single degradation process to the overall process, which is very useful for improving the thermal stability of materials.
5. It allows measuring the statistical goodness of the fit by significance tests.
6. It allows applying classical kinetic models, like Arrhenius, to each of the single degradation functions, which is useful for comparison with other materials in specific cases where some models have proved to work well.
7. It is easier to apply the classical kinetic models to the functions obtained by our method than to the raw TGA data, since the raw data contain noise that affects the derivative and integral estimations on which the classical methods are based.
8. The asymptotic behaviour at the beginning and end of each degradation process is perfectly reproduced.

3. Functional nonparametric model for materials discrimination by thermal analysis

3.1. Introduction
An important topic in materials science is the classification of materials. The information obtained by thermogravimetric analysis can be used for this aim. In this work, functional regression by nonparametric methods was used for the classification of different polymers. The method can be extended to any kind of material that can be analyzed by TGA. Pattern recognition techniques deal with the classification of observations into a finite number of classes (Watanabe, [30]). This can be done by several parametric models, such as discriminant analysis. Nevertheless, in the case of curves, the problem is functional, and nonparametric models are more suitable, since they take into account all the information from the sample (Ramsay and Silverman, [22]). The classification method proposed in this work is based on functional regression by nonparametric methods. Several PVC and wood samples were classified by this method. Finally, many simulated experiments were used to evaluate the accuracy of the method.

3.2. Nonparametric classification method
Nonparametric methods do not require previous estimation of any parameter. In this case, the kernel method was chosen; it is a nonparametric discrimination method that has been proved to work well in many cases (Ferraty and Vieu, [11]). The nonparametric Bayes classification rule was used to classify the samples: it assigns a future observation to the highest-probability class. The different TGA curves X_i were taken as the explanatory variable, and the classes as a sample of the response Y_i. Given a new TGA curve x, obtained from a material to classify, the estimator of the posterior probability is:

\hat{r}_h^{(j)}(x) = \frac{\sum_{i=1}^{n} 1_{\{Y_i = j\}} K\left(\frac{x - X_i}{h}\right)}{\sum_{i=1}^{n} K\left(\frac{x - X_i}{h}\right)}    (20)

Equation (20) is a version of the Nadaraya-Watson estimator reported by Ferraty and Vieu [11]. The L1 norm will be used as the distance between curves, and h is the bandwidth, or smoothing parameter. The classification rule is built from the estimator in equation (20) and minimizes the probability of incorrect classification, that is:
d_h(x) = \arg\max_{0 \le j \le G} \hat{r}_h^{(j)}(x)

where \hat{r}_h^{(j)} represents the estimated probability of the sample belonging to class j. The smoothing parameter h is chosen to minimize the probability of misclassifying a future observation. This bandwidth is taken as h_{CV}, the value that minimizes the following cross-validation function:

CV(h) = n^{-1} \sum_{i=1}^{n} 1_{\{Y_i \ne d_h^{(-i)}(X_i)\}}

where d_h^{(-i)} is the classification rule built without the i-th observation. Finally, given a new sample and its TGA trace, denoted x, the distances from this trace to the others are calculated and \hat{r}_h^{(j)}(x) is estimated for each class of material j ∈ {0, 1, 2, …, G}. The material is assigned to the class k that maximizes \hat{r}_h^{(j)}(x).
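The whole procedure can be sketched in a few lines of pure Python. The Gaussian kernel, the toy curves and the bandwidth below are illustrative choices, not fixed by the text:

```python
import math

def l1_distance(x, y):
    # L1 distance between two discretized TGA curves on the same time grid
    return sum(abs(a - b) for a, b in zip(x, y))

def gaussian_kernel(u):
    # illustrative kernel choice; the text does not fix K
    return math.exp(-0.5 * u * u)

def posterior(x, curves, labels, h, j):
    # equation (20): Nadaraya-Watson-type estimate of P(Y = j | X = x)
    weights = [gaussian_kernel(l1_distance(x, xi) / h) for xi in curves]
    den = sum(weights)
    num = sum(wt for wt, yi in zip(weights, labels) if yi == j)
    return num / den if den > 0 else 0.0

def classify(x, curves, labels, h):
    # Bayes rule: assign x to the class with the highest estimated posterior
    return max(set(labels), key=lambda j: posterior(x, curves, labels, h, j))

def cv_error(curves, labels, h):
    # leave-one-out cross-validation misclassification rate CV(h), used to choose h
    n = len(curves)
    bad = sum(1 for i in range(n)
              if classify(curves[i], curves[:i] + curves[i + 1:],
                          labels[:i] + labels[i + 1:], h) != labels[i])
    return bad / n

# toy demo: two groups of linear "traces" with different loss rates
slow = [[100 - 0.01 * t for t in range(100)] for _ in range(3)]
fast = [[100 - 0.05 * t for t in range(100)] for _ in range(3)]
curves, labels = slow + fast, [0, 0, 0, 1, 1, 1]
new = [100 - 0.048 * t for t in range(100)]
print(classify(new, curves, labels, h=50.0))  # 1
```

In practice the bandwidth h would be chosen by minimizing `cv_error` over a grid of candidate values, mirroring the h_CV selection described above.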
3.3. Application to PVC samples
The proposed classification method was applied to a sample of 16 PVC items, plasticized to different degrees. The sample weight was about 35 mg in all cases. The TGA experiment consisted of a heating ramp from 25 to 600 ºC at 10 K/min, followed by an isothermal step at 600 ºC for 15 minutes. A 50 ml/min purge of air was maintained throughout the experiment.
Figure 11. Overlay of sixteen TGA curves obtained from PVC.
Figure 12. Overlay of two TGA traces obtained from different samples of PVC (rigid and flexible).

Each sample was classified while keeping it excluded from the reference population. Applying the proposed method to the 16 PVC samples, 99.4 % correct classification was obtained. This can be seen in Figure 13, which plots the cross-validation function.
3.4. Simulated experiments
A simulation study was performed in order to check the method. Three kinds of wood were chosen, since these materials are very similar in composition and thermal behaviour, and it is not easy to classify them by TGA experiments alone. From actual experiments on the three samples, two sets of experiments were simulated by a logistic mixture model. The simulation was performed for each of the three groups, using the function:

\varphi^{(r)}(x) = \sum_{j=1}^{k_r} w_j^{(r)} f\left(a_j^{(r)} + b_j^{(r)} x\right), \qquad f(x) = \frac{\exp(x)}{1 + \exp(x)}, \qquad r = 1, 2, 3    (21)
The parameters for the model were simulated following a k_r-dimensional Normal distribution. Two different situations were considered: independent and dependent parameters.

Figure 13. Plot of the cross-validation function against the smoothing parameter.
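A sketch of the simulation step for the independent-parameter case (the group means and spreads below are hypothetical, not taken from the wood experiments):

```python
import math
import random

def f(x):
    # logistic function from equation (21), overflow-safe
    return 1.0 / (1.0 + math.exp(-x)) if x >= 0 else math.exp(x) / (1.0 + math.exp(x))

def simulate_trace(means, sds, ts, rng):
    # draw one trace from the mixture of equation (21); here the parameters
    # (w_j, a_j, b_j) are drawn as independent Normals (the independent case)
    params = [(rng.gauss(mw, sw), rng.gauss(ma, sa), rng.gauss(mb, sb))
              for (mw, ma, mb), (sw, sa, sb) in zip(means, sds)]
    return [sum(w * f(a + b * t) for w, a, b in params) for t in ts]

rng = random.Random(0)  # fixed seed for reproducibility
# hypothetical two-component group: mean (w, a, b) per component, with small spreads
means = [(8.0, 5.0, -0.010), (4.0, 12.0, -0.020)]
sds = [(0.2, 0.1, 0.0005), (0.2, 0.1, 0.0005)]
ts = range(0, 2000, 20)
traces = [simulate_trace(means, sds, ts, rng) for _ in range(90)]
print(len(traces))  # 90 simulated TGA traces for one group
```

The dependent case described in the text would instead draw the parameter vector from a multivariate Normal with a non-diagonal covariance matrix.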
The first set of simulated experiments consisted of 90 TGA traces, each with probability 1/3 of belonging to each of the three groups. The cross-validation bandwidth and the minimum of the cross-validation function were obtained from those simulated traces. Then, a second set of 1000 traces was simulated, using the same probabilities as in the first set. Each curve was classified by the estimated nonparametric Bayes rule, and the result was compared with the group from which the trace had been simulated. The percentage of the 1000 traces correctly classified was taken as an estimate of the probability of correct classification. The results show that the lower the variance of the model, the larger the percentage of correct classification, reaching 92 to 95 % correct classification for variances of 1/8 of the original variance. Generally, the percentage of correct classification slightly increases in the case of dependent data.

References
1. Arnold M., Veress G. E., Paulik J., Paulik F. (1982). A critical reflection upon the application of the Arrhenius model to non-isothermal thermogravimetric curves. Thermochimica Acta, 52, 67-81.
2. Brown M.E., Maciejewski M., Vyazovkin S., Nomen R., Sempere J., Burnham A., Opfermann J., Strey R., Anderson H.L., Kemmler A., Keuleers R., Janssens J., Desseyn H.O., Chao L., Tong B., Roduit B., Malek J. and Mitsuhashi T. (2000). Computational aspects of kinetic analysis. Thermochimica Acta, 355, 125-143.
3. Carrasco F. and Costa J. (1989). Modelo cinético de la descomposición térmica del poliestireno [Kinetic model of the thermal decomposition of polystyrene]. Ingeniería Química, 121-129.
4. Chambers J. M. and Hastie T. J. (1992). Statistical Models in S. Pacific Grove, CA: Wadsworth and Brooks, Chapter 10.
5. Cleveland, W. S. (1979). Robust locally weighted regression and smoothing scatterplots. Journal of the American Statistical Association, 74, 829-836.
6. Conesa J. A., Marcilla A. (1996). Kinetic study of the thermogravimetric behavior of different rubbers. Journal of Analytical and Applied Pyrolysis, 37, 95-110.
7. Dennis J. E., Gay D. M. and Welsch R. E. (1981). An adaptive nonlinear least-squares algorithm. ACM Transactions on Mathematical Software, 348-368.
8. Doyle C. D. (1961). Kinetic analysis of thermogravimetric data. Journal of Applied Polymer Science, 15, 285-292.
9. Fan, J. and Gijbels, I. (1995). Data-driven bandwidth selection in local polynomial fitting: variable bandwidth and spatial adaptation. Journal of the Royal Statistical Society, Series B, 57 (2), 371-394.
10. Fan, J. and Gijbels, I. (1996). Local Polynomial Modelling and Its Applications. Chapman and Hall, London.
11. Ferraty, F. and Vieu, P. (2002). The functional nonparametric model and application to spectrometric data. Computational Statistics, 17 (4), 545-564.
12. Flynn J. H., Wall L. A., Quick A. (1966). Direct method for the determination of activation energy from thermogravimetric data. Polymer Letters, 4, 323-328.
13. Francisco, M. and Vilar-Fernández, J. (2001). Local polynomial regression estimation with correlated errors. Communications in Statistics: Theory and Methods, 30, 1271-1293.
14. Freeman B. and Carroll B. (1958). The application of thermoanalytical techniques to reaction kinetics. The thermogravimetric evaluation of the kinetics of the decomposition of calcium oxalate monohydrate. Journal of Physical Chemistry, 62, 394-397.
15. Friedman H. L. (1964). Kinetics of thermal degradation of char-forming plastics from thermogravimetry. Application to a phenolic plastic. Journal of Polymer Science, Part C, 6, 183-195.
16. Gasser, T. and Müller, H.G. (1979). Kernel estimation of regression functions. In Smoothing Techniques for Curve Estimation, Lecture Notes in Mathematics, 757, 23-68. Springer-Verlag.
17. Gay D. M. (1984). A trust region approach to linearly constrained optimization. In Numerical Analysis. Springer, Berlin, 171-189.
18. Nadaraya, E. A. (1964). Remarks on non-parametric estimates for density functions and regression curves. Theory of Probability and its Applications, 15, 134-137.
19. Ozawa T. (1965). A new method of analyzing thermogravimetric data. Bulletin of the Chemical Society of Japan, 38, 1881-1886.
20. Popescu, C. (1984). Variation of the maximum rate of conversion and temperature with heating rate in non-isothermal kinetics. Thermochimica Acta, 82, 387-389.
21. Priestley, M. B. and Chao, M. T. (1972). Non-parametric function fitting. Journal of the Royal Statistical Society, Series B, 34, 385-392.
22. Ramsay, J. and Silverman, B. (1997). Functional Data Analysis. Springer-Verlag, New York.
23. Ruppert, D. and Wand, P. (1994). Multivariate locally weighted least squares regression. The Annals of Statistics, 22, 1346-1370.
24. Ruppert, D., Sheather, S. J. and Wand, M. P. (1995). An effective bandwidth selector for local least squares regression. Journal of the American Statistical Association, 90, 1257-1267.
25. Savitzky, A. and Golay, M. (1964). Smoothing and differentiation of data by simplified least squares procedures. Analytical Chemistry, 36, 1627-1639.
26. Seifert, B. and Gasser, T. (1996). Finite sample variance of local polynomials: analysis and solutions. Journal of the American Statistical Association, 91, 267-275.
27. Sestak, J. and Berggren, G. (1971). Study of the kinetics of the mechanism of solid-state reactions at increasing temperatures. Thermochimica Acta, 3, 1-12.
28. Stone, C.J. (1977). Consistent nonparametric regression. The Annals of Statistics, 5, 595-620.
29. Vyazovkin, S. (1996). A unified approach to kinetic processing of nonisothermal data. International Journal of Chemical Kinetics, 28, 95-101.
30. Watanabe, S. (1985). Pattern Recognition: Human and Mechanical. Wiley, New York.
31. Watson, G.S. (1964). Smooth regression analysis. Sankhya, Series A, 26, 359-372.