A Practical Guide to Lightcurve Photometry and Analysis
Brian D. Warner
Foreword by Alan W. Harris
Brian D. Warner
Minor Planet Observer
17995 Bakers Farm Road
Colorado Springs, Colorado 80908
U.S.A.
[email protected]
Library of Congress Control Number: 2005933716
ISBN-10: 0-387-29365-5
ISBN-13: 978-0387-29365-3

Printed on acid-free paper.

© 2006 Springer Science+Business Media, Inc. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, Inc., 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed in the United States of America.

9 8 7 6 5 4 3 2 1

springer.com
Foreword

It is a pleasure and an honor to offer a few words of foreword to Brian Warner's guide to photometry. In his preface, he makes a considerable point about amateurs and professionals, and those who dare or deign to step across the line supposedly dividing the two. Here I would like to make a few observations about the two monikers, and suggest that there is not, or at least should not be, a distinction between "amateur" and "professional." In preparing these remarks I referred to Webster's New Collegiate Dictionary (1960 edition; not so new anymore, but that was when my collegiate experience began):

am´a·teur, n. [F., fr. L. amator lover, fr. amare to love.] 1. One who cultivates a particular pursuit, study, or science, from taste, without pursuing it professionally; also, a dabbler. 2. In sports and esp. athletics, one who is not rated as a professional.

Well... a "dabbler" eh? "Not rated as a professional"? No wonder we have an identity problem here. Somehow in my youth as an amateur astronomer I missed this connotation of the term. To me, the meaning of the term amateur was dominated by its root, "to love," that is, one who does what he does out of love of the subject, not for remuneration (to the extent one can get away with that). In that context, most "professional" astronomers I know are also "amateurs": they love what they are doing and chose the profession primarily for that reason, not for how much money they could earn. Indeed, I have often advised students that if they are smart enough to eke out a living in astronomy, they are smart enough to get rich quick in some other field, thereby freeing themselves a bit later in life to become a "gentleman astronomer."

This brings me to another perspective on "amateur" versus "professional." Most folks need to earn a living somehow, so almost every "amateur" astronomer is a "professional" at something else.
And curiously, most of us who call ourselves "professional" astronomers are amateurs in other fields that are essential to our pursuits. This basic fact of life further blurs the distinction between amateur and professional, in any field. Indeed, my own graduate training is as a theoretician. I have never taken a single course in observational astronomy. So as an observer, I'm one of you, an amateur, self-taught in my own backyard.

Another conspicuous example is computer programming. "The other Brian Warner," author of this book, is a professional when it comes to software development, and writing for that matter, talents many of my "professional" astronomer colleagues sorely lack. A result of this is that there are software packages out there, written by professional software writers for the amateur community, that are far more powerful, efficient, and user-friendly than their "pro" counterparts. The people who wrote them may be "amateur" astronomers, but they are highly professional in their computer skills. Amateurs now have at their disposal computer tools for telescope pointing, focusing, data taking, and reduction that far surpass what is in use at most professional observatories. You'll find several such packages mentioned and described in this book.

I'll now turn to a bit of history. In the late 1990s, my colleague Ted Bowell and I noted that there was a dearth of activity among American amateur astronomers in minor planet (asteroid) observing, compared to various overseas observers, notably in Japan and Italy. Ted had just been to an amateur meeting in Italy and was favorably impressed by their organization and activities. We decided to organize a meeting of amateurs and professionals engaged in asteroid observations to try to stimulate interest in the amateur community in the United States. The meeting was hosted at Lowell Observatory in Flagstaff, AZ, 23–24 April, 1999. I think we were successful in stimulating interest.

During the meeting, it became apparent, to me at least, that a key to amateur participation was the availability of understandable and user-friendly computer tools. At that time, amateur participation in asteroid work was mainly in astrometry, aided by the program Astrometrica, written by Herbert Raab of Linz, Austria. This program allowed amateurs to do positional astrometry on asteroids in a friendly environment without having to understand fully every detail of the process (n.b., this is not the same as "in a state of ignorance"!). At the time of the Flagstaff meeting, I commented that in order to get amateurs involved in photometric observations of asteroids, what was needed was a Photometrica program. In the years since that time, Photometrica, admittedly not by that name, has been written, in fact in several versions, as cited in this book.
One is Warner's own Canopus program, and this book serves as a companion for observers who want to learn the game of asteroid or variable star photometry, using either Canopus or one of the several other options mentioned in the book. I think with the development of these user-friendly programs, and now with the publication of this book, CCD (charge-coupled device) photometry for amateurs has come of age, and I look forward to the contributions that will inevitably follow.

Alan W. Harris*
La Cañada, CA
August 2003

*There is also another Alan Harris (also engaged in asteroid research, in Berlin, Germany). In this case we are both Alan William Harris, so a middle initial doesn't differentiate. I refuse to identify him as "the other" Alan Harris, as I think neither should suffer a diminutive term. The same goes for Brian Warner.
Preface

Pardon the "D" in the author's name on the cover (it stands for "Dale," by the way). I'm the other Brian Warner. The real Brian Warner is the famous and distinguished one. He's the author of High Speed Photometry and Cataclysmic Variable Stars. There is yet another Brian Warner. He goes by the stage name of Marilyn Manson. I've received some very strange phone calls late at night from some of Manson's fans. "No, I'm the other Brian Warner." The disappointment at the other end was nearly devastating. So, I use the "D" to make sure it's clear who I am and so that you won't think I'll break out into song at any moment.

The real Brian Warner and I do have something more in common than our names: a deep interest in lightcurves. My interest is not so much in cataclysmic variables but in asteroids and eclipsing binaries. When I was in junior high school, I had hopes of being a professional astronomer someday and specializing in close contact binaries. Many years have passed and I'm not a professional astronomer. However, I do still occasionally observe eclipsing binaries as a nearly full-time "gentleman astronomer."

My first efforts in asteroid lightcurves began in the 1970s when I worked with the late Terry Schmidt at his Tiara Observatory in South Park, CO (yes – the South Park). It was my great fortune to leave my small telescope out on the porch on a day he was walking around the new neighborhood. He spotted the scope and knocked on the door. That started a 25-year friendship with Terry as my astronomy mentor. One of my three asteroid discoveries, 34398 Terryschmidt, is named in his honor.

I was pleasantly surprised at the popularity of the first edition of the Practical Guide, and so I want to thank everyone, the reviewers especially, who embraced the book.
As with any book on a complex topic meant for the reader with a beginner to intermediate technical understanding of the topic, the first edition covered some things in less detail than it might have and certainly left many things out. That's what second editions are for: to keep the material fresh, fill in gaps from the first edition, and tell of new ways to approach old problems. These are the goals for this second edition in addition to, of course, being a recruiting tool to bring more people into the exciting field of photometry and lightcurve analysis.

Since the first edition, I've developed and learned of what I believe are easier and more straightforward methods for reducing data, the process that is probably the biggest "monster" the beginning photometrist encounters, or at least fears. Those methods are presented in this second edition in lieu of those from the first. I could include both sets, but that would increase the size of the book beyond what the editors would allow and, more important, would serve more to confuse than help the reader. The original methods follow those described in other well-known
texts such as Arne Henden and Ron Kaitchuck's Astronomical Photometry, which should be on the shelf of any photometrist, no matter what level.

Along with changes in the reduction methods come those to the observing guidelines. The second edition has streamlined those so that you can go to the telescope with a much clearer game plan and so concentrate more on getting data of your target than on wondering if you're doing the right thing in the right way.

You'll still find information about analyzing lightcurves, both asteroid and eclipsing binary – after all, the title does include "Analysis." You'll also get some more insight about measuring images; in particular, how aperture sizes affect data. When trying to reach a higher level of work, you need to understand some of the finer details – without necessarily having to know the math or hard theory involved – so that you can use your software to the best of its ability. As I noted in the first edition, never trust a computer. Just because it gives you an answer doesn't mean you shouldn't question it. The more you know about what's involved in getting good data, the better a scientist you will be.

I'll keep this brief, so before I conclude, I want to thank Alan Harris, Arne Henden, Richard Binzel, and Petr Pravec again for their continuing support. I also want to thank Robert Buchheim and Richard Miles. Both have contributed enormously to my understanding of photometry's "dark side" and alternate methods. Their knowledge and generosity made this second edition far more complete than it might have been otherwise.

A number of people read the revised manuscript, trying to make sure that the material was both clear and accurate. They were Richard Binzel, Robert Buchheim, Robert Stephens, Robert Koff, Jerry Foote, and Greg Crawford. My thanks to them for finding the glaring errors. Any that remain are solely my responsibility.
Thanks to John Watson of Springer for taking the proposal for a second edition to the editorial board for its approval, as well as for his suggestions. Thanks also to Dr. Harry Bloom and Christopher Coughlin, also of Springer, for taking on the task of dealing with a writer who was forever the frustration of his English teachers. Without them, there would have been no second edition.

This one's for Margaret, too.
Brian D. Warner
Palmer Divide Observatory
July 2005
Contents

GETTING STARTED
1.1 What Is Lightcurve Photometry?
1.2 What Lies Ahead

TARGETS OF OPPORTUNITY
2.1 Asteroids
2.2 Variable Stars
2.3 Eclipsing Binary Lightcurve Characteristics
2.4 Cataclysmic Variables
2.5 Cepheids
2.6 Long Period (Mira) Variables
2.7 Semi-Regular Variables
2.8 Other Targets
2.9 Summary

PHOTOMETRY FUNDAMENTALS
3.1 A Little Bit of History
3.2 The First Color-Based Systems
3.3 The Johnson–Cousins Standard
3.4 Setting the Standard
3.5 Seeing Red
3.6 CCDs and Standard Magnitudes
3.7 Landolt Standards
3.8 Henden Sequences

THE PHOTOMETRY PRIMER
4.1 Instrumental Versus Standard Magnitudes
4.2 Air Mass
4.3 Extinction
4.4 Transforms and Nightly Zero-Points
4.5 Differential Versus All-Sky Photometry
4.6 Seeing and Scintillation
4.7 Matching Pixel Size to Seeing
4.8 Bias Frames
4.9 Dark Frames
4.10 Flat Fields
4.11 Photometry Apertures and Annuluses
4.12 Reporting Errors

PHOTOMETRIC REDUCTIONS
5.1 The Different Path
5.2 The Differential Formula
5.3 Clear to Visual Conversions
5.4 First-Order Extinctions – Are They Really Necessary?
5.5 The Same Color Index
5.6 Transforms First
5.7 Finding Transforms
5.8 The Hidden Transforms
5.9 First-Order Extinction
5.10 Finding First-Order Extinction (Modified Hardie)
5.11 A Variation on the Modified Hardie Method
5.12 Finding First-Order Extinction (Comp Star)
5.13 Comparison and Target Standard Color Index Values
5.14 Find the Color Indices of the Comparisons and Target
5.15 The Comparison Star Standard Magnitudes
5.16 Finding the Comparison Star Standard Magnitudes
5.17 Target Standard Magnitudes
5.18 Finding the Standard Magnitudes of the Target
5.19 The Different Path's End
5.20 A Minimalist Approach
5.21 Using the Minimalist Approach for Standard Magnitudes

SECOND ORDER EXTINCTION
6.1 Deriving a Single-Color Approach
6.2 The Slope of Slopes Method
6.3 When Is the Second-Order Term Applied?
6.4 Summary

TELESCOPES AND CAMERAS
7.1 The Telescope
7.2 The CCD Camera
7.3 Digital and Web Cameras
7.4 Filter Wheels
7.5 Guiding Considerations

IMAGING AND PHOTOMETRY SOFTWARE
8.1 Image Acquisition Software
8.2 Specific Features
8.3 Photometry Software
8.4 Conforming to Accepted Standards
8.5 Manual Versus Automated Measuring

COLLECTING PHOTONS
9.1 The First Step – Getting the Right Time
9.2 Planning the Observing Program
9.3 Selecting Targets
9.4 General Considerations
9.5 Asteroids
9.6 Variable Stars
9.7 The Observing Run
9.8 Measuring Images
9.9 From Image to Data
9.10 The Hands-On Approach for Measuring Images
9.11 Checking the Comparison Stars
9.12 The Automated Approach to Measuring Images

ANALYZING THE DATA
10.1 The Quality of Data

PERIOD ANALYSIS
11.1 About Merging Data and Setting Zero-Points
11.2 A Simple Start
11.3 To What Precision
11.4 Refining the Search Process
11.5 The Amplitude of the Lightcurve
11.6 Aliases in Depth
11.7 Plotting the Half-Period
11.8 A Specific Alias Example
11.9 The Case of 3155 Lee
11.10 Period Analysis on a Spreadsheet
11.11 From Lightcurve to Shape

BUILDING STAR SYSTEMS
12.1 Getting Started
12.2 Binary Maker
12.3 The Many Possibilities
12.4 The Effects of Changing the Inclination
12.5 The Effects of Temperature Changes in the Primary
12.6 The Effects of Temperature Changes in the Secondary
12.7 The Effects of Changing the Mass Ratio
12.8 The Effects of Gravity/Limb Darkening and Reflection

PUBLISHING YOUR DATA AND RESULTS
13.1 Confirm Before You Publish
13.2 Asteroids
13.3 Variable Stars
13.4 Learn by Association

BIBLIOGRAPHY

GLOSSARY

APPENDIX A: CONSTELLATION NAMES

APPENDIX B: TRANSFORMS EXAMPLE
Example Transforms Data
The Spreadsheet
The Hidden Transforms

APPENDIX C: FIRST-ORDER (HARDIE) EXAMPLE
The Data
The Spreadsheet

APPENDIX D: FIRST-ORDER (COMP) EXAMPLE
The Data
The Spreadsheet

APPENDIX E: STANDARD COLOR INDICES
The Data
The Spreadsheet

APPENDIX F: COMPARISON STANDARD MAGNITUDES
The Data
The Spreadsheet

APPENDIX G: TARGET STANDARD MAGNITUDES
The Data
The Spreadsheet

APPENDIX H: LANDOLT/GRAHAM STANDARD FIELDS

APPENDIX I: HENDEN CHARTS

APPENDIX J: HIPPARCOS BLUE–RED PAIRS
Steps Used to Produce the List
Hipparcos Blue–Red Pairs

APPENDIX K: SDSS BLUE–RED PAIRS

INDEX
Chapter 1
Getting Started I’m often asked by those who don’t do lightcurve work why they would ever want to do so? I can’t imagine not wanting to do science with one’s camera and telescope. However, we each had our own passions and reasons that led us into astronomy, and so while I may think, “Why not?” my answer should be something more. After all, as children there was nothing more frustrating than our mother’s reply, “Because!” when we asked why we couldn’t have another cookie. You’re probably familiar with the story about the three blindfolded men who each touched a different part of an elephant and then were asked what they had touched. The man who was lead to the tail replied, “It is a rope”. The second man, who touched the elephant’s hide, said “It is sandpaper”. The third man, after touching the trunk, exclaimed, “It is a snake.” Three points of view based on only the sense of touch. In a somewhat similar way, astronomers have tried to decipher the Universe. Unlike the three men, they can see the Universe but they cannot touch it, and what they see is often so far away and so faint that getting good data is difficult at best. Despite the obstacle of distance, astronomers have found ways to learn more and more from a very small amount of data, eventually being able to describe the Universe by “touching” enough of its parts and correlating theory against what’s been learned on planet Earth. By examining only the pinpoint of light from a star with various instruments and analyzing the data, an astronomer can tell you the temperature and size of a star thousands of light-years away. He can tell you if it has strong magnetic fields, if it is surrounded by a ring of gas or dust, and even if it has large sunspots. If the star is a binary, two stars circling around a common center of mass, he may even be able to tell you how much each star weighs. All of this and more are possible by studying a single point of light. How does photometry come into this? 
As you’ll see in the next section, photometry is measuring an object’s brightness. If the object changes brightness over time and you get enough data, you can plot the changes against time (or other parameters). This is a lightcurve. In the case of the binary star, the lightcurve can reveal the relative sizes of the two stars, their distances from one another, the shape of the orbit, the inclination of the orbit, whether there are star spots, and if there is matter being exchanged between the stars. If the lightcurve is measured through different color filters, the temperatures of the stars can be estimated. All of this and more is possible by studying a single point of light. What of my favorite targets, asteroids? A lightcurve of an asteroid can reveal the rate at which it rotates and whether it rotates about a single axis or is “tum1
2
Targets of Opportunity – Asteroids
bling” like a brick tossed into the air. From a lightcurve it’s possible to tell if the asteroid has a companion satellite and, if so, its distance from the primary and the relative size of each. If one gets several lightcurves over time, the shape of the asteroid and the orientation of its spin axis can be found. By imaging the asteroid in several colors, it’s possible to determine the likely composition of its surface. All of this and more are possible by studying a single point of light. That we can learn so much from nothing more than studying a pinpoint of light in the night sky never ceases to amaze me. It makes the Universe even more magical and tells of our capacity for imagination and to act on new ideas with new technology. We started with our eyes – very inefficient collectors of photons, then moved to film – only slightly better, and now have CCD and other imaging devices that can peer to the very edge of the Universe. The ability to study the night sky is not reserved for an elite few. With the advent of commercially available telescopes and CCD cameras, it’s possible for anyone to do science. Armed with a slightly larger than average scope and CCD camera, the backyard astronomer can image stars and galaxies fainter than the 5-meter Hale telescope did back in the day of film and plates. To me, that’s nearly unimaginable but true all the same. The mere fact that one can learn so much from so little may not be motivation enough for one to turn one’s efforts away from taking so-called “pretty pictures” and pursue science. One may think it too hard, that one can’t do anything useful, and that there is little recognition for such work. None of these could be further from the truth. Yes, there can be some steep learning curves – one of this book’s aims is to flatten and shorten those curves as much as possible. The list of useful work that the backyard astronomer can do is getting longer, not shorter. 
Granted, some things once on the to-do lists have been dropped because the professional community can do a better job with terabyte-generating surveys. However, old opportunities usually give way to new ones.

If you're looking for your name in lights, that's probably not going to happen. However, on the whole, the professional community welcomes amateurs who can establish a minimum quality of work. Quite often professionals are nearly desperate for help from a community of observers not tied to government funding, with nearly unrestricted telescope time, who have developed automated/robotic observing to an art form. Believe me, there is quite a thrill in seeing your name as a co-author on a paper of significant findings in a professional journal. It's even more of a thrill if you happened to make the initial discovery that leads to the results.

Some people like solving puzzles. Taking and measuring images is like opening the box to one of those large puzzles with hundreds of pieces. Analyzing the lightcurve is akin to fitting those pieces together. If you've ever slipped that last piece into place after hours of work, then you know something of the feeling when you finally get the last set of data that allows you to determine the period of an asteroid or model a binary star.
Targets of Opportunity – Asteroids
Even if you’re not firmly convinced, give science a try – maybe just one or two nights a month. Don’t try to prove Einstein wrong right off the bat; simply measure the brightness of some variable stars and turn in your results (you’ll learn where later). You don’t have to give up everything else, and maybe, just maybe, you’ll find yourself spending more time doing science than not.

Obtaining and analyzing lightcurves is just one of many ways to do science with your telescope. So, while this book may be dedicated to a particular aspect of research, don’t be afraid to explore other avenues such as spectroscopy and radio astronomy. Don’t forget that you can do more than one thing, too. Those pretty pictures can be analyzed for their scientific value as well.

For those with the opposite point of view – all science all the time – try taking some pretty pictures sometime. Then you’ll remember what got you hooked on astronomy in the first place. Once in a while I’ll end a night’s run by taking shots of galaxies and clusters – not to see if there is a supernova or for some other scientific reason – but because sometimes all those single points of light should be admired for their beauty and wonder alone.
1.1
What Is Lightcurve Photometry?
Simply put, lightcurve photometry is measuring the variations in brightness of an object over time for the purpose of plotting and analyzing the data. Those changes can be caused by a rotating object, such as an asteroid, or by one star passing in front of another, as happens with an eclipsing binary star. Internal changes within a star also lead to changes in its light output; pulsating stars such as Cepheids and long-period variables (LPVs) are just some examples.

It’s a subtle distinction, but finding a lightcurve means only plotting the data you obtain, usually as magnitude or flux versus time or phase. It does not mean that you determine the period, amplitude, or reason for the variations. For the purposes of this book, I’ll expand the definition to include finding the period and amplitude of the lightcurve plot. The reasons for the changes – at least beyond the obvious one of rotation – are best left to the more detailed and technical books listed in the Bibliography.

The goal is to give you enough of the basics to obtain and measure images, plot the data, and attempt to find a period and amplitude of the curve. For an asteroid, that’s about the most you can tell from a single curve. A number of curves over a short period, usually successive nights, might reveal that the asteroid is a binary through small, unexpected “dips” in the general lightcurve. Numerous curves over months or years can lead to a determination of the asteroid’s shape and spin axis. As for eclipsing binaries, a wealth of information can be learned just by analyzing their lightcurves. In the analysis section of the book, I’ll show some examples using a program that converts lightcurve data into a model of a binary system.
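To make “magnitude versus phase” concrete, here is a minimal sketch (in Python, with invented numbers) of folding a set of observation times by a trial period. This is illustrative code, not taken from any photometry package:

```python
def fold(times, period, epoch=0.0):
    """Convert observation times to rotational phase (0-1) for a trial period.

    times  -- observation times (e.g., Julian Dates), same unit as period
    period -- trial period, same unit as times
    epoch  -- reference time assigned phase 0.0
    """
    return [((t - epoch) / period) % 1.0 for t in times]

# Hypothetical example: three observations folded by a 5-hour trial period.
phases = fold([0.0, 2.5, 6.25], period=5.0)
print(phases)  # [0.0, 0.5, 0.25]
```

Plotting magnitude against these phases, rather than raw time, stacks data from many nights onto a single rotation cycle.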
1.2
What Lies Ahead
In the pages to come, you’ll learn some of the reasons you should consider doing lightcurve photometry and analysis by looking at some specific projects that you can undertake.

Before you can take images, you should understand some of the fundamentals of photometry. If you don’t understand at least the basics, it’s hard to analyze your work critically and find areas where you can improve. This material is in Chapter 4, “Photometry Fundamentals.” In “Photometric Reductions,” Chapter 5, you’ll learn some of the theory and the actual steps of reducing your raw magnitudes to a standard system.

Chapter 7, “Telescopes and Cameras,” takes a look at telescope and CCD camera basics. There are some important decisions to make if you don’t already have one or both, and some good advice even for those who do. The right equipment doesn’t help if you don’t have the software to use it. Most CCD cameras come with control software that is good for gathering images. However, if you want to be able to sleep while your telescope works, you need to learn a bit about automation and the capabilities of various packages in that regard. Also, there are specific points to consider when it comes to photometry software. In “Imaging and Photometry Software,” Chapter 8, I’ll cover some things you should keep in mind when making your software purchases or expanding your current operation.

You start the actual work of acquiring data with Chapter 9, “Collecting Photons,” where you’ll learn the essentials of working the telescope and camera. That chapter also covers measuring images and reducing to standard magnitudes. This chapter and Chapter 4 are the ones that differ most from the first edition. New approaches have been developed since the first edition for reducing instrumental magnitudes to a standard system. I believe you’ll find these more straightforward and easier to master.
Furthermore, they simplify the routines you’ll follow at the telescope so that you can spend more time working your target.

“Analyzing the Data,” Chapter 10, is short but important in that you learn how to look at the data to assess its quality. Bad data makes for bad results. Once you’re sure of the data, then you can analyze the asteroid or variable star lightcurve.

The basics of “Period Analysis” are covered in Chapter 11. You’ll soon learn that most asteroids and many binary stars are not well behaved and don’t produce simple curves. Getting a period out of the data can be as difficult as squeezing the proverbial blood out of a turnip. However, with practice and skill, you’ll soon be proficient.

In Chapter 12, “Building Star Systems,” you’ll see how that simple lightcurve can be converted into a model of a binary star. There are many tricks to this process, the most important being to understand how changing just one parameter of a system changes the theoretical lightcurve. When you understand those parameter changes versus effects, matching the theoretical curve to your curve becomes much easier.

The book closes with “Publishing Your Data and Results,” which covers a step that’s every bit as important as gathering the data in the first place. In the appendices are detailed examples of reducing to standard magnitudes using Microsoft Excel®, along with star charts for both the Landolt primary and Henden secondary fields, which are used for finding system transforms. Something new this time around is a set of data for blue–red pairs from the Hipparcos and Sloan Digital Sky catalogs that you can use for first- and second-order extinction and, in some cases, transforms. Space limitations do not allow charts for these fields, but with the excellent planetarium programs available, that should be no obstacle.

As you can see, there’s quite a bit of ground to cover. While I believe this book truly is a practical guide to photometry and lightcurve analysis, it is by no means the final word. I urge you to find a copy of at least the Henden/Kaitchuck book and wear out its pages almost as much as I have over the years. Steve Howell’s book, Handbook of CCD Astronomy, is also a good read, as is Bill Romanishin’s primer used for his college courses. The more you know about the process and how things can go wrong (and right), the better a photometrist you’ll be. However, never forget that you can do excellent work without knowing everything, and that you don’t need to know everything before you start. Gather some photons, work some easy targets at first, and before you know it, you’ll have your own gallery of lightcurves.
Chapter 2
Targets of Opportunity

The fields of astronomy and astrophysics are evolving so rapidly that it’s nearly impossible to keep up. What a textbook presents as current knowledge can be outdated in less than a year, even a few months. The same problem exists when trying to list targets of lightcurve opportunity. Theories and discoveries regarding asteroids have changed somewhat dramatically since the first edition of the Practical Guide, and so some items have been dropped from the original list, only to be replaced by others or at least take on a different focus or priority. Variable stars are no different, and new finds require, in some cases, prolonged and/or constant monitoring of some targets. Chasing after outbursts in cataclysmic variables, or sudden, unexpected behavior in an eclipsing binary, can easily occupy your observing schedule.

Don’t feel compelled to work only one type of object, unless that’s what really interests you. In my case, I work asteroids almost exclusively. Others will split their observing time among asteroids, variables of one type or another, and maybe make an occasional foray into supernova hunting. Again, there is no shortage of things to do.
2.1
Asteroids
There is much more to be learned from an asteroid’s lightcurve than its rotation rate. With the data used to make a lightcurve, you can help reveal not only the secrets of the asteroid but also those of the formation of the solar system. Here are just some of the things that can be done with, or learned from, asteroid lightcurves.

2.1.1
Determine the Size and Shape of Asteroids
The inversion of lightcurves into a shape and/or pole orientation (spin axis) is a very complex process that for many years did not produce good results. That has changed in recent years with work by such astronomers as Mikko Kaasalainen and Steve Slivan. What’s important here is that one or two lightcurves will not do. To get the best results, lightcurves must be obtained when the asteroid is at significantly different aspect angles (viewing angles). This means observing not just one apparition, i.e., the period of a few weeks or months spanning opposition or brightest appearance, but several different apparitions. With each apparition, the spin axis of the asteroid likely has a different angle to the observer’s line of sight.

Imagine looking at a spinning potato with its spin axis at right angles to the line of sight, and consider how the lightcurve might look. As the potato rotates on its axis, you see two maximums – when looking at the two broadsides of the potato – and two minimums – when looking down the length at either end. Now imagine what you would see when looking at the potato from the top, with the spin axis along the line of sight. In this case, you see very little variation, if any. Next, imagine the curve if the axis pointed somewhere between these two extremes, or if the potato were partly peeled. Go further and try to imagine the curve if the asteroid is shaped more like a dog-bone, a three- or four-sided pyramid, or a highly irregular chunk created by the collision between two asteroids.

By using the refined inversion process, the researcher can use the changing amplitude of the curve for each apparition to determine the pole orientation. With enough curves, the shape of the asteroid can be found; often five to seven curves are enough. Of course, there is a point of diminishing returns, but it is rarely reached, since many lightcurve observers make the mistake of assuming that once the period is determined, yet another curve has no significant value.

On a historical note, the famous astronomer Henry Norris Russell “proved” in a 1906 paper that it was impossible to determine the shape or albedo distribution (map of the light and dark areas) of an asteroid from its lightcurve. Russell argued that even a cigar-shaped asteroid could be “painted” so that it was faintest when seen broadside. This paper set back asteroid lightcurve studies for nearly 75 years, until more careful analysis and research showed that asteroids, through the collisional erosion process, paint themselves to an almost uniform gray.
There can be albedo differences from one asteroid to the next, but to a very good approximation, each asteroid can be considered uniform in color and brightness.

2.1.2
Search for Binary or Other Unusual Asteroids
Not so long ago the thought that an asteroid could have a satellite was unimaginable. Now, several dozen asteroids are known to be binaries. Of particular interest is that about 15–20% of NEO (near-Earth object) asteroids are thought to be binaries. How did such small bodies, generally <10 km, come to have satellites that are usually 20–50% the size of the parent?

Recent discoveries of binaries among the Hungaria family and Vestoids are starting to show some interesting results. Foremost among them is that the spin rates and binary population are similar to those of the NEO population. This seems to contradict previous theories that had the spin rates of NEOs affected to a large degree by planetary perturbations (NEOs cross the orbits of Mars and/or Earth). If both planet-crossing and non-planet-crossing asteroids have similar spin rates and binary populations, then some other force may be at work. The YORP effect is a primary candidate. However, the sampling of the Hungaria spin rates stands at about 1% of the known population (mid-2005), which is hardly sufficient to establish the facts with certainty.

It’s particularly important when you’re working asteroids that are potentially binary that you not give up too soon. It hasn’t been that long since binary asteroids were first found and observers started looking for clues in the lightcurves. Before that time, it’s very possible that observers working asteroids that had the right size and period (<10 km, 2–5 h) to make them binary candidates obtained data for only two or three nights before moving on. Had they gone just a little longer, they might have found that the asteroid actually was a binary by seeing an unexpected deviation from the curve, usually a dimming of 0.01–0.03 mag. This doesn’t mean that you sit on an asteroid for weeks, but that from time to time you work a likely candidate an extra day or two. If nothing else, the precision and accuracy of the period will be improved.

2.1.3
Find the Correlation Between Rotation Rate and Size
There is a strong barrier against rotation periods of less than about 2.25 hours, even among very small asteroids. This is the period below which centrifugal force would cause a loose conglomerate of rocks to fly apart, overcoming its own self-gravity. This lends further support to the rubble-pile structure of even small asteroids. On the other hand, a number of asteroids have been found with periods considerably less than 2.25 hours, some on the order of a few minutes. These must be monolithic rocks and not rubble piles, or they would fly apart.

2.1.4
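The barrier can be estimated from first principles. For a strengthless, self-gravitating sphere, gravity just balances centrifugal force at a critical period P = sqrt(3π / (G·ρ)), where ρ is the bulk density. A quick sketch; the density of 2000 kg/m³ is an assumed, illustrative rubble-pile value, not a figure from this book:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def critical_period_hours(density):
    """Shortest spin period (hours) a strengthless sphere of the given
    bulk density (kg/m^3) can sustain without shedding material."""
    return math.sqrt(3.0 * math.pi / (G * density)) / 3600.0

# An assumed density of 2000 kg/m^3 lands near the observed ~2.25-hour barrier.
print(round(critical_period_hours(2000.0), 2))  # ~2.3 hours
```

That the simple formula reproduces the observed cutoff is the quantitative argument behind the rubble-pile interpretation.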
Determine the H and G Values of Asteroids
The H and G values are often found in lists of asteroid elements. What are they and why are they important? The H value is the absolute magnitude of the asteroid: the brightness of the asteroid at a distance of 1 astronomical unit (AU) from both the Earth and Sun and at 0° phase angle. There is a direct, though complex, correlation between H and the size of the asteroid, one that accounts for the albedo, which is the ratio of sunlight reflected to sunlight received. Having an accurate H value gives the approximate size. This helps establish the correlation between size and rotation rate. It also helps establish the size against taxonomic class, orbit parameters, and other factors. This is all information that’s needed to develop theories on the evolution of the asteroid belt and, by extension, the solar system. It’s also critical for developing plans to deal with the threat of an asteroid hitting the Earth. Believe me, it’s not as simple as sending a crew of oil-well workers to blast the thing into a million pieces (a number of which would still be on a collision course with Earth).

If simple geometry were all that was needed to predict the brightness of an asteroid, the H value along with the Sun and Earth distances would be sufficient to make that calculation. However, there is a dependency on the phase angle of the asteroid, which is described by the value G. Sometimes called the slope parameter, G describes the brightness of the asteroid based on phase angles from 0° (opposition) up to about 120°. The term slope parameter comes from the fact that the magnitude–phase relationship is nearly linear, and so a plot is a line with a constant slope. That linearity breaks down when the asteroid is within about 7° of opposition and the opposition effect comes into play. The opposition effect causes asteroids (and the Moon, too) to be brighter at small phase angles. In general, the effect arises from the way light is reflected and scattered when the source of illumination is nearly perpendicular to the illuminated surface.

So, how does getting a lightcurve help determine these values? First you must find the average brightness of the asteroid, corrected to standard Earth and Sun distances of 1 AU each, by applying the formula

    VR = V – 5 log10(Δ·R)    (2.1)
where V is the measured average magnitude, Δ is the distance from the Earth, and R the distance from the Sun, both in astronomical units (AU). If you simply take a single magnitude reading periodically, you won’t be accounting for the asteroid’s brightness variations due to rotation, and so each value might fall above or below the average brightness. By having an accurate lightcurve period and amplitude, you can adjust each magnitude appropriately and get the asteroid’s average brightness for that time. A number of distance-reduced average magnitudes plotted against phase angle should produce a linear regression solution (excluding points within about ±7° of opposition). G is the slope of the line from that solution. The value of G is then used to reduce the magnitude of the asteroid to 0° phase angle, which gives the value of H.

The H and G values are measured in the standard Johnson V band. This means that if you plan to determine these values in addition to the period, you must have at least some measurements that are in, or can be reduced to, this band. You can read more about this type of work in Richard Binzel’s chapter in Solar System Photometry (see the Bibliography).

2.1.5
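Equation 2.1 is easy to put into code. The sketch below applies the distance reduction and then fits a straight line to reduced magnitude versus phase angle, as just described. Keep in mind that the full H–G system uses a more elaborate phase function, so the simple slope here is only the linear approximation discussed in the text, and all the observation numbers are invented for illustration:

```python
import math

def reduced_magnitude(v, delta, r):
    """Equation 2.1: reduce magnitude v to unit Earth (delta) and Sun (r)
    distances, both in AU."""
    return v - 5.0 * math.log10(delta * r)

# Invented data: (average V mag, Earth dist AU, Sun dist AU, phase angle deg)
obs = [(14.90, 1.20, 2.10, 8.0),
       (15.35, 1.35, 2.15, 12.0),
       (15.70, 1.50, 2.20, 16.0)]
alpha = [a for (_, _, _, a) in obs]
vr = [reduced_magnitude(v, d, r) for (v, d, r, _) in obs]

# Least-squares slope of reduced magnitude vs. phase angle; the intercept
# at 0 deg plays the role of H in this simplified linear picture.
n = len(obs)
sx, sy = sum(alpha), sum(vr)
sxx = sum(a * a for a in alpha)
sxy = sum(a * m for a, m in zip(alpha, vr))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print(round(slope, 3), round(intercept, 2))
```

A positive slope means the asteroid fades (larger magnitude) as the phase angle grows, which is the expected behavior away from the opposition surge.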
Assist Radar Observations of Asteroids
Astronomers use high-powered radar systems, such as the Arecibo radio telescope in Puerto Rico, to bounce signals off nearby asteroids. A considerable amount of information about the asteroid can be determined by measuring what happens to the frequency of the returned signal (changed by the Doppler effect) and the time it takes the signal to reach the asteroid and return. Some very amazing “images” have been generated as a result of radar observations. These are not true images in the usual sense of looking through an optical telescope; instead they represent frequency shift and distance. With a little imagination, one can “see” the shape of the asteroid and determine whether or not it has a satellite.

Ground-based lightcurve observations can help establish limiting parameters before radar observations begin, allowing the astronomers to determine whether the radar observations have a reasonable chance of providing useful data. For example, if the asteroid is rotating too slowly, the frequency shift caused by rotation may be too small to detect. Lightcurve observations obtained at the same time as the radar observations are also used to help constrain the results found by radar.

2.1.6
Remove Observational Biases
There are several biases in the current sample of asteroid lightcurves. All of these skew the results of studies using rotation rates and may lead to inaccurate conclusions about the formation of the asteroid system.

Bright and Large
It’s easy to work the bright – and, by inference, larger – asteroids. There’s no harm in doing that if your work is helping to determine the shape and/or pole of the asteroid. What’s very much missing from the data pool are the smaller (fainter and/or more distant) asteroids. As noted above, there appears to be a barrier regarding rotation rate and size. There may be other barriers or plateaus not yet recognized because there is insufficient data.

Working faint targets means noisier data. As shown in Fig. 2.1, the data for each individual run on the asteroid were fairly noisy, and it would have been difficult to establish good lightcurve parameters based on just one or two nights. However, by getting a number of runs, the noise averaged out and the period analysis code was able to determine the period with a high degree of precision.
Figure 2.1 Beating down the noise. If a target is faint, such that the data in the lightcurve is noisy, it’s still possible to get a reasonable lightcurve.
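The “beating down the noise” idea can be demonstrated with synthetic data: fold several noisy nightly runs at the period and average the points in phase bins, and the scatter of the bin means drops well below the per-point noise. Everything here (period, amplitude, noise level, sampling) is invented purely for illustration:

```python
import math
import random

random.seed(42)
period = 0.25       # assumed rotation period, days
amplitude = 0.15    # assumed lightcurve half-amplitude, magnitudes
noise = 0.05        # assumed per-point measurement scatter, magnitudes

# Simulate five noisy nightly runs of 60 points each.
obs = []
for night in range(5):
    for i in range(60):
        t = night + 0.3 * i / 60.0
        model = amplitude * math.sin(2.0 * math.pi * t / period)
        obs.append((t, model + random.gauss(0.0, noise)))

# Fold at the period and average within 20 phase bins.
nbins = 20
bins = [[] for _ in range(nbins)]
for t, m in obs:
    phase = (t / period) % 1.0
    bins[int(phase * nbins) % nbins].append(m)
means = [sum(b) / len(b) for b in bins]

# RMS scatter of the bin means about the model is several times smaller
# than the 0.05-mag per-point noise.
resid = [means[k] - amplitude * math.sin(2.0 * math.pi * (k + 0.5) / nbins)
         for k in range(nbins)]
rms = math.sqrt(sum(r * r for r in resid) / nbins)
print(round(rms, 3))
```

With roughly 15 points per bin, the scatter of each mean shrinks by about the square root of that count, which is exactly what rescues faint-target photometry.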
Close to Home
Somewhat tied to bright and large is the bias against asteroids outside the main belt. Near-Earth objects (NEOs) are covered in great detail as they go zipping past Earth. This is understandable, as these objects present a potential impact danger. Main-belt objects, orbiting between Mars and Jupiter, are easy to find and follow, and the larger ones are well known and studied. However, there are classes of asteroids that require extra attention. Among these are high-inclination objects, Mars-crossers (asteroids that come between the orbits of Mars and Earth), and the Trojans and Centaurs. Trojans are asteroids that share the orbit of a planet (usually Jupiter, but also Mars and other planets) but lie either 60° ahead of or behind the planet itself. The Centaurs are objects with orbits that lie between those of Jupiter and Neptune. Being in a special class, these asteroids represent something unique, or at least not as common, in the overall asteroid family. Getting more information on these classes, including lightcurves, builds the data pool from which researchers can develop theories on the evolutionary processes that created each class.

Slow Rotators
There are asteroids on which a workday would seem interminable. For example, there’s (288) Glauke, where the time for a single rotation is about two months. When an observer first works one of these targets and sees little or no change in the curve over a few hours, the temptation is to move on to something more easily determined; the instant gratification is missing. Working one of the slow rotators is more difficult because it means sticking with the object for long periods of time, and it almost demands reducing to standard magnitudes, because you must use a number of different comparison stars in differential photometry. However, the need to discover and then obtain good lightcurve data on these targets is critical for developing theories that fully explain their origin and evolution.
2.2
Variable Stars
It’s probably much easier to convince a newcomer to do lightcurves of variable stars. The thought of Algol and other famous variables often comes to mind, and there are many easy targets whose lightcurves are decidedly regular and not overly complex. After working an asteroid where adding another run of data seems only to make finding a solution less likely, it’s often a pleasure to work one of the regular variables with its very predictable curve.

What follows is a brief description of some of the types of variable stars and what can be learned from the lightcurve of each. It is by no means an exhaustive discussion; there are whole books on the subject. Though a bit dated and hard to find, I particularly like J.S. Glasby’s Variable Stars. The Bibliography has a listing of recommended books.
Targets of Opportunity – Variable Stars
2.2.1
What’s in a Name?
You’ll rarely see a variable referred to by a common name; “Algol” and “Mira” are the exceptions rather than the rule. Nor do you often see the designation as a Greek letter, e.g., Beta Lyrae. Instead, you’ll see things such as “W UMa” or “V344 Sagittarii.” A look back shows how this came about.

The first variable found in a constellation was given the designation “R”. The second variable was named “S”, and so on. After “Z”, the sequence started again but with two letters, i.e., RR was the tenth variable in the constellation, followed by RS, RT, and so on. That wasn’t enough for some constellations, especially those containing the Milky Way, and so the next round went SS through SZ, then TT through TZ, and continued until ZZ was reached. This allowed for 54 variable stars in a constellation.

That still wasn’t enough. To extend the sequence, astronomers went back to the beginning and used AA through AZ, then BB through BZ, and so on. Note that no designation has the second letter earlier than the first; you won’t find a CA or QF anything. The end of the road was reached when a star was given the designation QZ (RR would be next, but that was taken long before). That still wasn’t enough; constellations such as Sagittarius, Aquila, and Cygnus have many more than the 334 variables allowed by the sequence so far. A little practicality came into the picture at this point: the 335th variable in a constellation was simply called “V335.” That’s very simple, and you might wonder why V1 wasn’t the first variable. Astronomers, like everyone else, are not always logical.

Appending the official three-letter designation of the constellation in which the variable is found completes the name. For example, And is for Andromeda and UMa stands for Ursa Major. You’ll find a complete list of constellation names and three-letter designations in one of the appendices.

2.2.2
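The naming sequence above is completely algorithmic, so it can be generated in a few lines. Note that, by convention, the letter J is never used in these designations (which is why the letter pairs stop at 334 rather than 360). This is an illustrative sketch, not code from any official catalog:

```python
# Letters used in variable-star designations; J is skipped by convention.
LETTERS = "ABCDEFGHIKLMNOPQRSTUVWXYZ"

def build_sequence():
    """Return the 334 letter designations in historical order:
    R..Z, then RR..RZ, SS..SZ, ... ZZ, then AA..AZ, BB..BZ, ... QZ."""
    names = list("RSTUVWXYZ")
    for block in ("RSTUVWXYZ", LETTERS[:LETTERS.index("Q") + 1]):
        for first in block:
            # Second letter is never earlier than the first.
            for second in LETTERS[LETTERS.index(first):]:
                names.append(first + second)
    return names

SEQUENCE = build_sequence()

def designation(n, constellation):
    """Designation of the n-th variable discovered in a constellation."""
    if n <= len(SEQUENCE):
        return SEQUENCE[n - 1] + " " + constellation
    return "V" + str(n) + " " + constellation

print(designation(10, "UMa"))   # RR UMa  (the tenth variable)
print(designation(335, "Sgr"))  # V335 Sgr
```

Running `len(SEQUENCE)` confirms the 334 figure quoted in the text.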
Eclipsing Binaries (Extrinsic Variables)
Eclipsing binaries are made of two stars moving in orbits about a common center of mass. They are also called extrinsic variables because, for the most part, their variability is not caused by actual changes in either of the stars. Instead, it is the result of each star’s hiding all or part of the other as the two move in their orbits. Of course, there are exceptions, and sometimes one or both of the stars is variable on its own. Those are left for additional reading.

There are several classes of eclipsing binaries, the Algol type being one. Here there are two well-separated stars, with one star usually much hotter and smaller than the other. The lightcurves often show significantly different amplitudes for the two minimums, the deeper one being the primary eclipse, the result of the hotter, brighter star being eclipsed by the other star. Fig. 2.2 shows one example of an Algol-type lightcurve. In many of these, the secondary minimum – at the middle of the plot – is much shallower than the primary, which has a flat bottom when the eclipse is total.
Figure 2.2 Algol-type lightcurve. The primary star of the system is small enough that it is totally eclipsed by its companion. This causes the flat portions at the deep minimums and the sharp secondary minimum.
Figure 2.3 A close binary lightcurve. This type of curve is produced when the stars are similar in size, very close to or in contact with one another, and the orbital inclination is such that neither star is completely hidden.
There are two other common types of eclipsing binaries, the Beta Lyrae and W UMa types. Both are characterized by the two stars being extremely close to one another, the main differences being how close the stars are and their spectral types. The Beta Lyrae stars are close to one another but not in contact. Being so close, the shapes of the two stars are highly distorted, almost like two eggs. They are early main-sequence stars, usually of type B or A. Due to tidal effects, the two stars are usually in synchronous rotation, meaning the same side of each star faces the other, but this is not always the case. The lightcurve shows a continuous variation with no flat portions, and there is usually a distinct difference in amplitude between the two minimums. From many studies, it appears that the secondary star is the less massive of the two and is most likely a sub-giant. The masses and diameters of both stars are generally much larger than the Sun’s.
The W UMa group differs in that one or both stars have expanded to fill their Roche lobes. If both stars overfill their lobes, the system is a contact or overcontact binary. If only one of the two stars fills its Roche lobe, then the system is a semi-detached binary. The stars in a W UMa system are generally of type F or later. The lightcurve periods are well under a day, sometimes less than one-quarter of a day, and the primary and secondary minima have almost equal depths. The relative depths of the two minima can change over time, even from one cycle to the next, which makes it extremely difficult to determine a model for the system from the lightcurve alone.

Fig. 2.3 shows a typical close binary system lightcurve. The spectral type for this particular system indicates that it is probably a W UMa type, despite the fact that the depths of the minima are not quite equal. Modeling of the system based on this curve and the assumed temperatures also supports that classification.
2.3
Eclipsing Binary Lightcurve Characteristics
The lightcurve of an eclipsing variable can tell quite a bit about the system, beyond the obvious period of revolution.

2.3.1
Orbit Shape (Eccentricity)
Assume the inclination of a binary system is 90°. If the orbit is circular, the primary and secondary minimums are equally spaced, meaning the time from the primary to secondary minimum is the same as the secondary to primary. Fig. 2.2 shows such a case.
Figure 2.4 The effect of an eccentric orbit on the lightcurve of an eclipsing binary. If the orbit of the binary system is not circular, i.e., eccentricity is >0, then the position of the secondary minimum is not exactly half the distance between two successive primary minima. The amount of shift indicates the degree of eccentricity. The shift is also dependent on the orientation of the orbit to the line of sight. See the text for a more detailed explanation.
If the orbit is not circular, then the minima may not be equally spaced. This follows from Kepler’s second law, which says an orbiting body moves faster when nearest its companion (periastron) and slower when farthest away (apastron). The angle between the semi-major axis of the orbit and the line of sight determines how an eccentric orbit changes the curve. If the line of sight is along the semi-major axis (the longitude of periastron is 90° or 270°), then the minima will be equally spaced, but the two eclipses will have different durations. If we’re looking broadside to the semi-major axis, then the duration of each eclipse is the same and the curve resembles Fig. 2.4.

2.3.2
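The displacement of the secondary minimum can be turned into a number. To first order in eccentricity, the orbital phase of the secondary eclipse obeys φ₂ − 0.5 ≈ (2/π)·e·cos ω, where ω is the longitude of periastron, so timing the minima gives e·cos ω directly. A sketch of this standard small-eccentricity approximation (not a formula from this book; the observed phase 0.56 is invented):

```python
import math

def ecosw_from_phase(secondary_phase):
    """First-order estimate of e*cos(omega) from the orbital phase of the
    secondary minimum (phase 0.0 = primary minimum). Valid for small e."""
    return (math.pi / 2.0) * (secondary_phase - 0.5)

# Invented example: secondary minimum observed at phase 0.56.
print(round(ecosw_from_phase(0.56), 3))  # ~0.094
```

A circular orbit (secondary exactly at phase 0.5) gives zero, as expected; a measurable shift immediately tells you the orbit is eccentric, even before any modeling.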
Orbital Inclination
Imagine a system with one star larger than the other. If the inclination of the orbit is 90°, such that we’re seeing it edge-on, then during primary minimum the primary star is totally eclipsed by the larger star. Half an orbit later, when the primary passes in front of the secondary, the eclipse is not total. If the secondary is significantly larger than the primary – e.g., Zeta Aurigae, where the secondary is about 300 times larger than the primary – it’s better to call the event a transit instead of an annular eclipse. In this situation, the primary minimum is flat for the time that the primary is totally eclipsed. The secondary minimum shows a flat portion as well, though its depth is usually much less than that of the primary minimum.

If the inclination is not 90°, then it’s possible that neither star is completely covered, in which case there is no flat portion at either minimum. In addition, the secondary minimum is sometimes very shallow and would be undetectable were it not for photoelectric or CCD photometry.

2.3.3
Reflection Effect
Figure 2.5 Lightcurve of TW Crv, which is dominated by the reflection effect. Courtesy of Dirk Terrell, after data from Chen, A., O'Donoghue, D., Stobie, R.S., Kilkenny, D., Roberts, G., van Wyk, F. 1995, “EC 11575–1845: a new close binary with a large reflection effect discovered by the Edinburgh-Cape Survey,” Mon. Not. Roy. Astron. Soc. 275, 100.
Targets of Opportunity – Variable Stars
Note in Fig. 2.2 that after the primary eclipse is over, the curve is not flat from that point to the beginning of the secondary minimum. This is due to what's called the reflection effect. It's not really a reflection of the primary's light off the surface of the secondary that causes this but the extra heating of the side of the secondary closest to the primary. The additional heating makes that side a little brighter, which causes the slight rise leading up to and the parallel decline following the secondary minimum in a curve such as in Fig. 2.2. What you see in Fig. 2.5 is a case of the reflection effect taken to extremes.
2.3.4
Limb Darkening
If you've seen images of the Sun, you've probably noticed that the edge of the disc appears fainter than the center. This is because when you're looking at the center of the disc, you're looking into the deeper and hotter portions of the solar atmosphere. When you look at the limb, you're looking at the higher, and cooler, layers of the solar atmosphere. This effect is seen in a lightcurve as a rounding of the bottom of a minimum: the "shoulders" going into and out of the primary minimum, instead of being sharp boundaries, are rounded a bit. The overall shape of the primary minimum changes as well, as you'll see later in the analysis section of the book.
2.4
Cataclysmic Variables
Cataclysmic variables are among the most interesting stars you can observe. They often lie dormant, but sometimes, literally in a burst of excitement, they suddenly brighten to many times their normal state. These outbursts are somewhat predictable but never with great certainty. This means getting many images that show nothing unusual but, if you're lucky, some that catch the beginning or at least part of an outburst. CVs are binary stars. One is a normal star, usually of type F or G – much like our Sun. This star is usually the secondary star of the system. The primary is a white dwarf, a very hot and dense compact star. The two stars usually orbit one another once every one to ten hours and occupy a total space not much larger than the Earth–Moon system. Material from the cooler star is pulled towards the dwarf at tremendous speeds and with enough energy to create massive amounts of x-rays during the accretion process. This action results in a disc of material around the primary star (an accretion disc). There are actually several subclasses of CVs. The common trait is that they all display outbursts of activity, but of varying frequency and amount.
2.5
Cepheids
The Cepheids are named after the prototype, Delta Cephei, and are some of the most important stars in the history of astronomy. Many years ago, it was determined that there is a very strong correlation between the period of a Cepheid star and its actual luminosity. By finding the period of a Cepheid, one could then derive its distance by comparing its apparent magnitude versus its absolute magnitude. It was later determined that there are two types of Cepheids (to be absolutely correct, Classical and Type II Cepheids), each with its own period–luminosity relationship. Once this was discovered, the "size" of the universe nearly doubled as distances were modified to match the new information. Cepheids are like balloons, expanding and contracting at very precise and regular intervals. As they change size they also change spectral class, i.e., they change color. What's more, the stars are not at their brightest when at their smallest. Instead, they are brightest about a quarter of a cycle after minimum size. This paradox is solved if you keep in mind that surface area has a considerable effect on total luminosity. While the star is a bit cooler after minimum size, its larger size makes up the difference.
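The period–distance chain described above is easy to sketch in code. The function below is my own illustration, not from the book; the period–luminosity coefficients are placeholder values and should be replaced with a properly calibrated relation before any real use.

```python
import math

def cepheid_distance_pc(period_days, apparent_mag, a=-2.43, b=-4.05):
    """Estimate the distance (in parsecs) to a classical Cepheid.

    The absolute magnitude comes from an assumed period-luminosity
    relation, M = a*(log10(P) - 1) + b (a and b are illustrative
    values); the distance then follows from the distance modulus,
    m - M = 5*log10(d / 10)."""
    abs_mag = a * (math.log10(period_days) - 1.0) + b
    return 10.0 ** ((apparent_mag - abs_mag + 5.0) / 5.0)

# A 10-day Cepheid seen at apparent magnitude 6.0 lies at roughly
# 1000 parsecs under these assumed coefficients.
```

Note how a small error in the apparent magnitude propagates: every 0.1 m of photometric error shifts the derived distance by about 5%, which is why accurate photometry of Cepheids matters so much.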
2.6
Long Period (Mira) Variables
The first Long Period Variable (LPV) to be discovered was Mira (Omicron Ceti). Bayer noted it in his catalog as being fifth magnitude in the early seventeenth century, while Fabricius cataloged it as second magnitude in 1596. Holwarda finally confirmed the period of about 330 days in 1638, showing that the star varied from second to tenth magnitude. LPVs do not have rigidly fixed periods, varying by as much as 10%–15% from one cycle to the next. There is also no real consistency to the shape of the curve, even for the same star. Visual observers often follow these types of stars since they change slowly and have amplitudes on the order of several magnitudes. These are not particularly “hot targets” for photometric work because of their long periods. Following a star for 330 days is not always practical and really requires using filtered observations so that the data can be accurately reduced. However, with the automation equipment and software available these days, it would not be difficult to take measurements of some LPVs every week or so as you concentrate on other targets. Of particular interest is to follow the stars in V and then R or I since they are much brighter in the redder regions.
2.7
Semi-Regular Variables
These stars can't make up their minds. They have a strong tendency towards being predictable but then often go astray. Such periods as can be assigned to them range from 20 to 1,000 days. The amplitude of the lightcurves is much less than for the
long period variables, being more on the order of 2.5 magnitudes or less. The group, generally known as SR, is usually broken into several subgroups, SRA through SRD, and probably more, depending on who's doing the classification. The SRA stars have periods longer than 35 days, while the SRBs have less well-defined periods starting at 20 days. Again, the term period is used loosely, since these red giant and supergiant stars do not show the higher degree of regularity of even the not-so-regular long-period variables.
2.8
Other Targets
2.8.1
Extra-Solar Planets
As of mid-2005, more than 150 planets circling a star other than ours were known, and the number is constantly growing. There are two main approaches to finding extra-solar planets. The first involves carefully monitoring the position of the star and noting any "wobble" in the star's position. The slight back and forth change in position is caused by one or more planets having sufficient mass so that the center of mass of the extra-solar system is measurably different from the physical center of the star. As the planets and sun move about the common center of mass, the star's position shifts slightly. The amount of the wobble and any periodicity in that wobble help determine the number and masses of any planets. The second approach is the one of most interest here. A planet of sufficient size crossing in front of the star will cause the star's light to dim, sometimes by only 0.01–0.02 m or even less. This may seem very small but, with careful work, these transits can be easily observed. Most stellar candidates for monitoring are bright and require short exposures, sometimes requiring a filter to prevent saturating the CCD chip and avoid scintillation noise (see "Seeing and Scintillation" on page 35). Fig. 2.6 shows data of an extra-solar transit collected by California amateur Robert Stephens, whose observatory is deep within the light pollution of the Los Angeles area. This points to one of the great advantages of CCDs, namely, the ability to get useful data even under less than perfect conditions such as heavy light pollution.
2.8.2
Novae and Supernovae
Many novae are actually binary stars, and so you could put them in that category. However, novae often provide a much more dramatic rise and fall and catching one on the upswing and then following it back to its quiescent state can provide valuable information. Should you have the equipment to monitor a supernova, your data can be of even greater value. Usually professionals will jump right on a
supernova but it’s not uncommon for amateurs to be the first with initial data since weather and scheduling can conspire against the professionals.
Figure 2.6 The lightcurve of a planet crossing the face of its parent star. The planet has a diameter about 2.6 times that of Jupiter and a temperature of about 1,000 K. Robert D. Stephens, Santana Observatory.
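As a rough plausibility check on the transit depths quoted earlier, the fractional flux lost is simply the ratio of the projected disc areas, (Rp/Rs)², converted to magnitudes. The sketch below is my own illustration; the Sun-sized star is an assumption made purely for the example.

```python
import math

R_JUP_KM = 71492.0    # Jupiter's equatorial radius, km
R_SUN_KM = 695700.0   # the Sun's radius, km

def transit_depth_mag(planet_radius_rjup, star_radius_rsun):
    """Transit depth in magnitudes from the ratio of projected disc
    areas: fraction of flux lost = (Rp/Rs)**2."""
    ratio = (planet_radius_rjup * R_JUP_KM) / (star_radius_rsun * R_SUN_KM)
    return -2.5 * math.log10(1.0 - ratio * ratio)

# A Jupiter-sized planet crossing a Sun-sized star dims it by roughly
# 0.011 mag, consistent with the 0.01-0.02 m depths quoted in the text.
```

This also shows why larger planets around smaller stars are the easiest photometric targets: the depth grows with the square of the radius ratio.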
Many amateurs take part in supernova searches, taking images of galaxies in hopes of finding a new star within the galaxy. The challenge for the more industrious is to do the comparisons in real time and have the system alert the observer if something is found or – better yet – automatically start a series of images to get as much data on the supernova as possible.
2.9
Summary
Any one of the projects mentioned above can provide enough work to keep any individual or even a group of observers busy for years. Mix and match these or other projects as you like. The main thing is that you will be doing very real science with your telescope and camera.
Chapter 3
Photometry Fundamentals
This is not going to be a complete technical discussion on filters and photometry, but I do hope to give you all the information you need to start and execute a solid lightcurve program. For those technical details, I strongly recommend the latest edition of the Henden–Kaitchuck book on photometry. Bill Romanishin's book on CCD photometry is also an excellent source. I also urge you to read some of the books on basic image processing, particularly on creating and using darks, flats, and bias frames. Without those correction frames, it's almost impossible to do high-quality photometry. See the Bibliography for a listing of recommended reading. While you can do photometry without knowing the intricate details of the underlying theory, you will be working in a bit of a vacuum. This doesn't mean you have to be able to sit down and derive the formulae for transforms off the top of your head. However, if you do understand the basic relations between color differences, air mass, and the factors that affect precision and accuracy, then you'll have a better chance of doing higher-quality work.
3.1
A Little Bit of History
The history of estimating the brightness of stars goes back to antiquity. At that time, the brightest stars were given a magnitude of 1 and the faintest stars a magnitude of 6. The values for stars between the limits were guessed at, often quite differently from one observer or catalog to the next. This problem was made worse by the invention and use of the astronomical telescope, which revealed stars much fainter than the human eye could see. The process of estimating values became even more arbitrary, with some stars varying by up to three magnitudes among the catalogs. In the 1700s Edmund Halley recognized that first-magnitude stars are about 100 times brighter than sixth-magnitude stars. However, it was not until English astronomer Norman Pogson proposed his system that some order was brought to the magnitude scale. In the Pogson magnitude scale, a difference of exactly 5.0 magnitudes corresponds to a ratio in brightness of exactly 100.0. Like the response of the human eye, this scale is logarithmic, and so the ratio of brightness from one magnitude to the next is 2.512 or, more exactly, 10^0.4. In general,

∆mag = –2.5 * log(B1/B2)   (3.1)

where B1, B2 = the brightness of Objects 1 and 2
In the context of this book, B1 and B2 will be the electron counts for each object as derived from pixel values. For example, if Star1 has a count of 10,000 and Star2 has a count of 8,000, then DeltaMag = –2.5 * log10(10000/8000) = –0.24 Why the minus sign in front of 2.5? It accounts for the fact that the magnitude system is “upside down” and so more negative numbers represent brighter stars. What the result tells us is that the brighter star has a magnitude 0.24 less than the fainter star. This is often confusing for the beginner, especially when describing a range of values. For example, when someone says that he observes objects 14th magnitude or less, does he mean brighter or fainter than 14th magnitude? When in doubt and it is pratical, use “brighter than” and “fainter than” to avoid confusion. We now have a system that equates a specific ratio of brightness to a specific magnitude value. What’s missing is the “zero-point,” which is a value added to the differential value in order to get the true magnitude of each star. For example, if we know the true magnitude of the faint star is 14.24, then we know that the brighter star is 14.00. Once you establish this zero-point, your data can be converted from differential values to absolute values.
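Equation (3.1) and the worked example above are easy to verify in code. This quick sketch is mine (the function name is my own); it reproduces the –0.24 result from the text.

```python
import math

def delta_mag(b1, b2):
    """Magnitude difference between two objects from their measured
    brightnesses (Eq. 3.1). A more negative result means Object 1
    is the brighter of the pair."""
    return -2.5 * math.log10(b1 / b2)

# The example from the text: counts of 10,000 and 8,000.
print(round(delta_mag(10000, 8000), 2))  # -0.24
```

Feeding in a brightness ratio of exactly 100 returns –5.0, confirming the Pogson definition that 5.0 magnitudes equals a factor of 100 in brightness.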
3.2
The First Color-Based Systems
If you've seen photographs of the same region of the sky taken through different filters, you know that some stars appear brighter in one color than in the others. Some nebulae are brighter in red light, while many galaxies are brighter in bluer light. Astronomers noted a long time ago that many variable stars didn't change brightness by the same amount when observed through one filter versus another. Which colors differ, and the amount of the difference between them, can be used to estimate the temperature and spectral classification of a star. The earliest attempts at a magnitude system that accounted for color used the terms visual and photographic. The latter favored blue light and was based on measuring stars on photographic film (thus the term photographic). As the science of producing photographic materials advanced, the response of different types of film was fitted to different spectral regions. One of those was in the green region, where the human eye is the most sensitive. The term photovisual was created to distinguish between a magnitude determined by visual observations and a magnitude derived from measuring a plate of film favoring the green region. For many years, magnitudes were based on photographic and photovisual values. When film with a high sensitivity toward the red end of the visual spectrum was produced, a new system of Blue–Red was devised. However, two colors alone are not always enough for a thorough study of astronomical targets. The creation of the modern multi-color magnitude systems was ushered in by the development of the photomultiplier tube in the 1930s.
Actually, photoelectric photometry existed before then, but the cells, raw cousins of the simple photocells you can get today, were not very sensitive. The photomultiplier tube (PMT) changed the way astronomy was done. This device receives photons that strike a charged plate, which – because of the photoelectric effect – generates electrons. The electrons are attracted by another charged plate, which generates even more electrons. The “cascade” continues for up to ten stages, by which time about two million electrons exist for each electron created by a photon striking the first plate. Over time, just as with film, PMTs have been tailored to favor certain spectral regions and, also as with film, the use of PMTs required that a color system be devised so that measurements among observers could be accurately compared.
3.3
The Johnson–Cousins Standard
In the 1950s, H.L. Johnson and W.W. Morgan devised what would become the most commonly used photometric standard to date. Using the 1P21 PMT and a carefully constructed set of filters, they created the UBV system. This system defines three broad pass bands. The U band favors the UV region with a peak at about 3500 Å. The B band is centered on 4400 Å and the V band is centered at about 5500 Å. See Fig. 3.1. The reasons for selecting the regions for the filters were not arbitrary, though some astronomers disagreed, and still do, with those reasons. The V filter approximates the photovisual magnitude system. The 1P21, and not the filter, defines the cutoff in the red region. The B filter approximates the original photographic (blue) system. The filter for this band actually consists of two filters. One is used to block light from shorter wavelengths and the Balmer discontinuity. The U band extends to shorter wavelengths than B. The original filter that defined the system actually passes a fair amount of red light. This red-leak must be filtered by a second filter in combination with the U filter, or it must be measured independently and then subtracted from the U readings.
Figure 3.1 The Johnson UBV standard. The plot shows the response in the three colors of the UBV system and that of the human eye.
A significant problem with the U filter is that neither it nor the 1P21 defines the lower end of the pass band. Instead, it is the Earth’s atmosphere. The altitude of the observatory and other conditions determine the amount of UV light cut off by the atmosphere. Getting a good match to the original U band in the UBV system can be very difficult. For most amateurs, it is a moot point. The commonly used CCD chips have very low efficiency in the U region and so require very long exposures to get a sufficient signal-to-noise ratio.
3.4
Setting the Standard
Defining a color system also means defining values in each color for a number of stars, what are called standard stars. If you measure those standard stars, you can then use the data to derive what are called transforms, which is a set of equations that equate raw instrumental magnitudes on your system to the standard system. The original Johnson system defined only ten standard stars. This was not enough for other observatories to transform to the system. Johnson and Morgan created a more extensive list of secondary standards that were closely tied to the original ten stars. Unfortunately, many of these stars are brighter than 6th magnitude. With a scope of any size, these quickly reach saturation levels on a CCD.
3.5
Seeing Red
The original UBV system was put to extensive use but in addition to any other shortcomings, the system did not reach into the red or infrared. This was changed when Johnson later extended the UBV system to include R and I. The R band was centered on 6700 Å while the I band was centered on 8000 Å. In 1973 Cousins redefined the R and I bands using a different, more efficient PMT and filters. The centers for the two bands were 6500 Å and 8000 Å. The R and I magnitudes of the two systems are not the same, with the Cousins system now being the preferred one. You’ll often see Rc and Ic, which means Cousins RI magnitudes.
3.6
CCDs and Standard Magnitudes
What should be apparent at this point is that the detector used to gather photons defines the system as much as the filters. If CCDs and the 1P21 had approximately the same spectral response, then the same filters used for traditional photoelectric photometry could be used on CCDs and the transformation from an instrumental system to a standard system would be fairly easy. This is not the case. The CCD chip is more sensitive in the red and much less in the blue, even when using a blue-enhanced chip (the “E” you often see in camera models, e.g., ST-9E). Converting your measured values to the standard system using the original filters would be very difficult, if not impossible.
Figure 3.2 The responses of the Bessell CCD filters. The shapes were made to approximate closely those of the original Johnson–Cousins system when using a typical CCD camera with extended red response.
In a 1990 paper, Bessell derived a new set of filters that allowed most CCDs to transform values to the Johnson–Cousins system (see Fig. 3.2). Most filter suppliers use these specifications for the filters they sell. To be safe, you should confirm that the filters you get are for Bessell specs and Cousins Rc and Ic bands.
3.7
Landolt Standards
The final definition of the UBVRcIc standard is the result of the extensive work by Arlo Landolt. In 1973 he published a list of stars carefully transformed to the Johnson UBV system. He extended that list to include stars in the Cousins R and I bands in 1983. In 1992 he published one of the more important works in photometry. This included an extensive list of stars in the Johnson UBV and Cousins RI systems. Most observers now use those lists to transform their system to that of the Johnson–Cousins systems. In fact, if one does not use those lists, one should be prepared to provide more than the usual justification regarding the accuracy of one's results. Landolt's catalog includes fields near the celestial equator with magnitudes in at least the B, V, and often R bands. Being on the equator makes them readily available to observers in either hemisphere. You can download the Landolt catalog from several locations, including the Lowell Observatory site: FTP to ftp.lowell.edu/pub/bas/starcats and be sure to read the LANDOLT.NOTES file. Not all the Landolt fields are of the highest quality; some stars were measured only a few times or are variable, and should be avoided.
At the Lowell ftp site, you’ll also find the LONEOS catalog prepared by Brian Skiff. This catalog includes most of the better Landolt stars. However, other than those stars that are marked as Landolt standards, do not use stars from the LONEOS catalog to determine the transformation values. The systematic error in the LONEOS catalog is on the order of 0.05m. This is good in many cases but not good enough for determining transformation values. However, a particular plus for the LONEOS catalog is that it does provide more recent and accurate positions for the Landolt stars.
3.8
Henden Sequences
Arne Henden has produced a number of photometric sequences that can serve as excellent substitute, near secondary, standards. The errors for the measurements are included in his files so that you can determine which stars to use. Most fields favor the northern sky, but many are available to southern observers. The appendices include finder charts for a number of the Henden fields, selected for the number of stars and low errors. The URL to download all the Henden files is also given in the appendix.
Chapter 4
The Photometry Primer
This chapter will cover the details of reducing the raw instrumental magnitudes you get when measuring your images to magnitudes on a standard system. There's a little math involved, but not too much. Also, the approach taken in this book will be a little different from the more traditional ones you'll find elsewhere. Throughout the photometric process and into analyzing the data, there is often the tendency to accept the results presented by the computer without question. Do not make this mistake. When finding the photometric transforms, extinction values, and zero-points, don't assume the derived value is correct. Often those values are determined by linear regression methods that can be easily fooled by just one bad data point. Look at a plot of the data (the software should provide one). See if there are one or more data points "out of skew." If so, eliminate that point from the calculations and rework the numbers. There is no substitute for common sense.
4.1
Instrumental Versus Standard Magnitudes
When you measure the brightness of an object, you obtain what is called a raw instrumental magnitude. The value is determined by the formula: m = –2.5 * log(I)
(4.1)
where I is the measured total intensity of the target. For CCD images, this translates into the total of the pixel values from the star times the ADU conversion factor (gain) less the sky background. Chip/camera manufacturers often state this as e–/ADU. A common value is 2.3. For example, given an ADU value of 1000 and an ADU conversion of 2.3 e–/ADU, then

m = –2.5 * log(1000 * 2.3) = –8.404

Note that the value of 1000 is not the sum of the pixel values within a measuring aperture but is the sum contributed only by the star. The sky background must be removed before calculating the magnitude. Also, recall that as a star gets brighter, its instrumental magnitude is more negative. For example, if I = 1, then m = 0.0; and if I = 1,000,000 then m = –15.00. This is in keeping with the definition of the stellar magnitude system, where brighter stars have smaller values than fainter stars. Some objects have negative
magnitudes, e.g., Venus at –4 and the Moon at about –13 when full. In the case of instrumental magnitudes, all objects have negative values. Again, the more negative the magnitude, the brighter the object. If you were doing only differential photometry (see "Differential Photometry" on page 32), you could stop measuring and start analysis once you have determined the instrumental magnitudes, assuming your comparisons and target were similar in color and within a degree or two of each other. However, if you want to compare your results directly with those from other observers, or to compare values over a large range of time, you should convert the magnitudes to a standard system.
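The sky-subtraction step in Eq. (4.1) is where beginners most often go wrong, so here is a minimal sketch of the whole calculation. The parameter names are my own; the gain of 2.3 matches the text's example.

```python
import math

def instrumental_mag(aperture_sum_adu, sky_adu_per_pixel, n_pixels, gain=2.3):
    """Raw instrumental magnitude (Eq. 4.1). The sky background is
    subtracted from the aperture total before the ADU counts are
    converted to electrons with the gain (e-/ADU)."""
    star_adu = aperture_sum_adu - sky_adu_per_pixel * n_pixels
    return -2.5 * math.log10(star_adu * gain)

# Reproduce the text's example: a net star count of 1000 ADU
# (here, 1500 ADU in the aperture less 500 pixels of sky at 1 ADU each).
print(round(instrumental_mag(1500.0, 1.0, 500), 3))  # -8.404
```

Note that if the sky estimate is wrong by even a few ADU per pixel, the error scales with the aperture area, which is one reason faint stars in large apertures give poor photometry.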
4.2
Air Mass
Air mass is a value that reflects the amount of air through which you look to see a star. When the star is directly overhead, you're seeing it along the shortest possible path through the Earth's atmosphere. When you look at a star near the horizon, you're looking through a much longer path. The longer the path, the more the star's light is dimmed. As you probably know, blue light is scattered more than red light by the atmosphere, and so less blue light is received as the viewing path gets longer. So, as a star gets lower to the horizon, it also appears redder. There are several formulae for determining air mass. The simplest is X = 1/cos(z), or X = sec(z)
(4.2)
where z is the zenith distance, or the distance of the object from the overhead point. In other terms, z = 90° – altitude
(4.3)
When a star is directly overhead, z = 0° (altitude = 90°), which gives X = 1.00. If the star is 30° above the horizon, then X = 2.00 (1/cos(60°) = 1/0.5). In general, you should not observe below 30° altitude. To borrow from the ancient map makers, below that altitude "there be monsters." The air mass changes rapidly below 30° altitude (60° zenith distance) and is easily affected by changes in humidity, barometric pressure, clouds, haze, and pollution. You'll often hear that because you're doing differential photometry, you can ignore air mass and extinction issues. This is mostly but not entirely true. There are other reasons as well to stay above 30°:
1. Differential extinction across the frame.
2. Differential color extinction (second-order) across the frame.
3. Differential refraction, which can make objects become miniature spectra and blue stars change position with respect to red stars.
4. Increased scintillation (discussed on page 35).
If at all possible, you should confine observations to altitudes of 30° and higher. Equation (4.2) is a good approximation down to about 30° altitude, but it should not be used for more critical reductions and certainly not if observations are below this altitude. Bemporad developed the most common formula in use today: X = sec(z) – 0.0018167(sec(z)–1) – 0.002875(sec(z)–1)^2 – 0.0008083(sec(z)–1)^3
(4.4)
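Equation (4.4) is straightforward to put into code. The sketch below is my own (the function name and the degree-based interface are illustrative choices):

```python
import math

def airmass_bemporad(apparent_zenith_deg):
    """Bemporad's air mass formula (Eq. 4.4). The argument is the
    apparent (refraction-corrected) zenith distance in degrees."""
    s = 1.0 / math.cos(math.radians(apparent_zenith_deg))
    return (s
            - 0.0018167 * (s - 1.0)
            - 0.002875 * (s - 1.0) ** 2
            - 0.0008083 * (s - 1.0) ** 3)
```

At z = 60° (30° altitude) this gives about 1.994, only slightly below the plain sec(z) value of 2.00, which shows why the simple formula is adequate for most work above 30° altitude.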
In this case, z is the apparent zenith distance – not the true distance – and so you must include a correction for refraction. Keep in mind this formula is based on observations made more than 100 years ago. Even so, the formula is probably good to ±0.001X, which is more than sufficient for all but the most critical work. To calculate the zenith distance for a star at a given position in the sky, use the formula:

sec(z) = (sin(φ) sin(δ) + cos(φ) cos(δ) cos(H))^–1   (4.5)

where
φ = observer's latitude
δ = declination of the object
H = hour angle (Local Sidereal Time – RA)
4.3
Extinction
Extinction is the dimming of a star's light caused by its passage through the Earth's atmosphere. It is expressed in units of magnitudes/air mass. So, as the air mass gets larger, i.e., the star is closer to the horizon, the total extinction increases. The value for extinction is not the same for all colors. Since red light is scattered less by the Earth's atmosphere than blue light, the value for extinction is lower for red light than blue. The value for visual (green) is somewhere in between. "First-order" and "second-order" often precede "extinction." First-order extinction is the type of extinction just described. Second-order extinction is dependent on the color of the object and the air mass at which it's being measured. As the object moves towards the horizon, the blue portion of its light is "dimmed" more rapidly than the red portion. Therefore, to compute the true exoatmospheric magnitude of the star correctly – the magnitude as seen outside the Earth's atmosphere – one must also determine and apply the second-order extinction. Second-order extinction is defined in units of magnitudes/air mass/unit of color index. The color index is the difference in the magnitudes of an object when measured in two colors, usually B and V. By definition, the (U–B) second-order value is always 0.0. Second-order terms involving (V–R) or (V–I) are usually considered insignificant in all but the most critical cases. Therefore, the only time you usually need to worry about second-order extinction is if you're making measurements with a B or Clear filter. Note that the above applies more when doing absolute photometry. As you'll see in Chapter 6, it's a different story for differential photometry.
4.3.1
Finding First-Order Extinction Values
As you'll see in "First-Order Extinctions – Are They Really Necessary?" on page 50, you can often skip finding first- and second-order terms entirely ("Heresy!" I hear some of you say, but it really is true). I admit, there are times you should find and use the extinction terms, but they occur mostly when doing absolute photometry, which is not covered in depth in this book. For those times when you do need to find extinction values, or are simply curious, please read on, since you need to be familiar with this basic information so that you can see why extinction terms can be ignored, even when finding transforms.
The Comp Star Method
In this method, you follow a group of stars in a single frame across as large a range of air mass values as possible. For example, you'll get much better results if you follow a field from the time when it's just above 30° in the east until it's some distance past the meridian. If you follow the field for only two or three hours and the field is near the meridian the entire time, then the range of air mass values is small, which can quickly skew the results of a linear regression. The problem with the comp star method is that it requires that conditions do not change significantly throughout the time that you follow the field. For all but the better locations, usually the professional sites, this is rare. However, it is still the most commonly used method among amateurs and is effective as long as you keep an eye on the data when doing the reductions.
The (Modified) Hardie Method
In this method, you shoot two fields of standard stars that have a large air mass difference. For example, you would image a field about 30° above the horizon and another that's nearly overhead, giving a separation of almost one air mass. Also, the fields are imaged with a minimum amount of time in between. This reduces errors caused by changing conditions.
As you’ll see later, you don’t need to worry about using stars of similar color, though it’s good general practice to avoid stars that are extremely red or blue. The former are particularly troublesome throughout the reduction process, partly because they are often variable. Richard Miles has developed a variation of this method using blue–red star pairs culled from the Hipparcos catalog. You can read about his method in the 2005 proceedings for the annual meeting of the Society for Astronomical Science (http://www.socastrosci.org). It will not be discussed in detail here, but it is worth investigating. You will find a set of blue–red stars listed in one of the appendices.
The Photometry Primer 31
The advantage of the Modified Hardie method is that you can quickly determine the extinction values and nightly zero-points. If the process is repeated once or twice more during the night, you can determine the stability of the values or, at least, find a good average value for sets of observations sandwiched in between runs on the standard fields.
4.3.2
What Are Good Values for Extinction?
The value for first-order extinction depends on many factors but, in general, the value for V should be around 0.2 to 0.4; the higher value applies to sites nearer sea level – assuming only differences in atmospheric thickness. The B band extinction will be higher than V, on the order of 0.1–0.2 m higher. A check would be that k'b – k'v, also written k'bv, is about 0.15. The R and I band extinctions should be slightly less than V, around 0.02–0.08 m less. A check is that k'v – k'r, also written as k'vr, is a small positive number. The second-order extinction value is usually determined only for (B–V) and is found by getting B and V images of a field with a red/blue pair of stars. Usually, second-order extinction has a small negative value, on the order of –0.04 m per unit color index per unit air mass (m·CI⁻¹·X⁻¹). In this case, the red star dims less quickly than the blue, which could be viewed as it getting relatively brighter, and so a negative value makes sense. The second-order method in this book works a little differently, as discussed in Chapter 6.
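To show how these terms are applied, here is a sketch using representative values from the discussion above (k' = 0.25 for V, second-order k'' = –0.04); the function name and star values are hypothetical.

```python
def extinction_corrected(m_inst, X, ci, k1, k2=0.0):
    """Instrumental magnitude with atmospheric extinction removed.

    m_inst: instrumental magnitude; X: air mass; ci: color index, e.g. (B-V);
    k1: first-order extinction (mag per air mass);
    k2: second-order extinction (mag per unit CI per air mass).
    """
    return m_inst - k1 * X - k2 * ci * X

# A star at air mass 1.5 with (B-V) = 0.65, measured at v = -7.250:
m0 = extinction_corrected(-7.250, X=1.5, ci=0.65, k1=0.25, k2=-0.04)
# m0 = -7.250 - 0.375 + 0.039 = -7.586
```

Note that with a negative k'' the second-order term slightly brightens the corrected magnitude of a red star, matching the sign argument above.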
4.4
Transforms and Nightly Zero-Points
To convert raw instrumental magnitudes to a standard system requires that you determine the transforms (also called transformation values). These are applied to the raw instrumental magnitude for a given star (corrected for first- and second-order extinction, if necessary) so that the derived standard magnitude for the star matches its catalog value. You must determine these values even if you’re using filters that supposedly match your system to the standard band. This is for any number of reasons; e.g., the filters may not be perfectly matched, your CCD may have a slightly different response than the original specification, and so on. Part of the solution for the transforms is the zero-point. If atmospheric conditions and your system never changed, the zero-point could be determined once and for all. However, this is not the case, especially if you change something in the system, and so it’s usually necessary to find the zero-point for each filter nightly. However, it is not necessary to compute the transform values every night. Unless you’ve made an actual change in the system, such as recoating the mirrors, changing filters, etc., the transform values are usually constant enough to be used for days, if not weeks, at a time. You should confirm this assumption. Even some of the better professional sites see significant changes in the transform values on a seasonal basis.
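The transform and zero-point for one filter can be found with a simple linear fit against standard stars. A sketch with invented catalog and instrumental values, using the common model V – v = Tv(B–V) + ZP, where v is the (extinction-corrected, if necessary) instrumental magnitude:

```python
import numpy as np

# Hypothetical standard stars: catalog V, catalog (B-V), and measured
# instrumental v. The numbers were constructed for illustration.
catalog_V  = np.array([11.210, 12.050, 10.870, 12.880, 11.640])
catalog_BV = np.array([0.32, 0.78, 1.15, 0.55, 0.01])
inst_v     = np.array([-8.774, -7.911, -9.0725, -7.0925, -8.3595])

# Fit V - v = Tv*(B-V) + ZP: the slope is the transform, the intercept
# the nightly zero-point.
Tv, ZP = np.polyfit(catalog_BV, catalog_V - inst_v, 1)

# Apply to a program star of known color index:
def standard_V(v_inst, bv):
    return v_inst + Tv * bv + ZP
```

With these invented data the fit recovers Tv ≈ –0.05 and ZP ≈ 20.0. In practice you would use many more standard stars covering a wide color range and inspect the residuals of the fit.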
Remember that to get your data on the true Johnson–Cousins standard system, you need to use the Landolt stars (see page 25). The Henden fields can get you very close but not exactly on the standards. The appendices have finder charts for both Landolt and Henden fields.
4.5
Differential Versus All-Sky Photometry
There are two general methods for photometry. These do not concern the way you measure the images but the general approach you use to get and reduce data. Which one you use depends on the type of program you’re running.
4.5.1
Absolute (All-Sky) Photometry
Absolute, or all-sky, photometry is the technique used more often by professionals, primarily because they have facilities located in places where the nighttime transparency is excellent and constant. If you’re not fortunate enough to live in one of the pristine locations, you can still do all-sky photometry, but you may not get as many suitable nights and may find getting a good match in some filters, especially B and U, a little more difficult. With all-sky photometry, the premise is that you are going to be measuring stars over a large area of the sky and so, most likely, over a wide range of air masses. This means that first-order extinction and nightly zero-point determinations are required and must be found with a good deal of precision and accuracy. In general, several standard stars are measured in widely varying locations around the sky. The measurements of these stars can be used to determine both the extinction values for the evening and the transformation values. These measurements are made not just at the beginning of the evening but at least once or twice during the run to assure conditions did not change dramatically and to provide additional data points for the solution. Once you have the transform and extinction values, the measurements of the target and comparison stars can be reduced to absolute magnitudes on a standard system.
4.5.2
Differential Photometry
Differential photometry means that you find the difference between a target’s magnitude and that of a comparison star, or average of several comparisons. The result is sometimes called the delta magnitude. Differential photometry is easier than all-sky photometry and provides the most accuracy when measuring small variations. That’s particularly important when the amplitude of some asteroid lightcurves is less than 0.1 m. With a modest CCD field of view, the process becomes very simple and very effective as the comparisons are often within the field with the target at all times. This means that all of the stars and targets have very similar air masses and so
extinction effects all but cancel out, provided the stars and the target are similar in color. If you use comparisons that are distinctly different in color, e.g., a blue comparison while working a red target, you may not be able to eliminate extinction calculations entirely, especially if you are working at low altitudes. This is one reason you should avoid going below 30° altitude. You should use at least two comparison stars. The second star, the check star, is used just in case the first comparison is variable. Since the CCD field likely has a number of stars, don’t hesitate to use more than two. However many you select, the average of the individual magnitudes is used for the single comparison value. This helps smooth out minor errors in measuring each star. Four or five is a good number as a compromise between having enough for a good smoothed average and creating additional work. Of course, if the software allows automatically measuring the stars, using a larger number won’t hurt. Another advantage of differential photometry is that you can use only the raw instrumental magnitudes and do your analysis on the differential values. However, those differences are not on a standard system and are not absolute magnitudes. There are ways to convert both unfiltered and filtered raw differential magnitudes to a standard system and absolute magnitudes. Those are discussed later on. If you are doing differential photometry, even if filtered, be sure to record which comparison stars you used. This allows a reduction of the data to a standard system once those comparisons are calibrated. You need to get at least a few measurements in two standard colors, since a true transformation to a standard system is not possible without a color index for the target and comparisons. Many observers use only a V filter; that’s fine as long as the target doesn’t change color.
In that case, a simple offset can be applied to each observer’s data – assuming that the same comparison stars are used and the filter puts the overall system reasonably close to the standard.
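The arithmetic of differential photometry is simple enough to sketch in a few lines (invented instrumental magnitudes; the comparisons are averaged in magnitudes, as described above):

```python
import numpy as np

# Instrumental magnitudes from a single image (hypothetical values).
target_mag = -6.412
comp_mags = np.array([-7.105, -6.880, -7.322, -6.954])  # four comparisons

# Average the comparison magnitudes into a single comparison value,
# then take the difference: the delta magnitude for this image.
comp_mean = comp_mags.mean()
delta_mag = target_mag - comp_mean
```

Repeating this for every image in the run yields the delta magnitudes that make up the raw lightcurve; as long as the same comparisons are used throughout, the zero-point of the night cancels out.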
4.5.3
Signal-to-Noise (S/N or SNR)
Signal-to-noise is a statistical term that describes the ratio between the useful signal (photons from the target) and the total signal (the photons from the target, sky background, inherent noise in the chip, etc.). The basic concept can be stated as (4.6)
(Signal + Noise) / Noise
All other factors being equal, the larger this ratio, the stronger the target’s contribution and the more precise your measurements. An SNR of 100 means that the noise is about 1% of the total signal. Translated to magnitudes, this means that your measurements have about 0.01 m precision – not accuracy; there is a difference. Another way of expressing the SNR is by using N, the number of electrons collected,
(4.7)
SNR = √N
The value N is not the total of the pixel values contributed by the target. Pixel values are in ADU, analog-to-digital units. For example, a 16-bit A/D converter allows values between 0 and 65,535. To get the SNR, you must multiply the ADU sum, call it ΣADU, by the analog-to-digital conversion factor (gain) for your camera, call it GADU, which is expressed as electrons/ADU (e–/ADU), i.e., (4.8)
SNR = √(ΣADU · GADU)
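A quick numeric sketch of Eq. 4.8, with an invented ADU sum and gain:

```python
import math

# Summed pixel values in the aperture (ADU) and the camera gain (e-/ADU);
# both values are hypothetical.
sum_adu = 21_000
gain = 2.3

n_electrons = sum_adu * gain     # N = 48,300 electrons
snr = math.sqrt(n_electrons)     # SNR = sqrt(N), about 220
```

This is the photon-statistics lower bound on the noise; the fuller treatment mentioned next adds camera and sky terms that only lower the SNR.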
Computing an accurate SNR can be very complex. The actual formula contains terms that account for noise in the camera, dark current, and other factors. The software you use should be able to provide a reasonably good estimate of this value so that you can determine the quality of your data and report it. Never forget to include errors when you report your data. You can get an approximate error of any given measurement by using 1/SNR – assuming the SNR value is valid. However, this is in terms of flux and not magnitudes. In order to get the error in magnitudes, use (4.9)
SNRmag = 1.0857 / SNRflux
Table 4.1 gives some examples of converting SNR to magnitudes.

SNR    Error (mag)
200    0.005
100    0.011
 50    0.022
 25    0.043
 10    0.110
  5    0.220

Table 4.1 Errors in magnitudes for given SNR values.
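The conversion in Eq. 4.9 is easy to verify; this sketch reproduces the values in Table 4.1 to within rounding:

```python
# Approximate photometric error in magnitudes from the flux SNR (Eq. 4.9).
def mag_error(snr):
    return 1.0857 / snr

for snr in (200, 100, 50, 25, 10, 5):
    print(f"SNR {snr:3d}: error {mag_error(snr):.3f} mag")
```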
Be careful about taking these values literally by assuming that SNR alone indicates the data errors. Other errors, including systematic ones, contribute to the true error.
4.5.4
What Is an Acceptable SNR?
From Table 4.1 you can see that for an object with a small-amplitude lightcurve, say, <0.1 m, an SNR of 100 becomes important; you don’t want the noise to be a significant portion of the lightcurve. On the other hand, if the amplitude is larger, e.g., 0.2–0.5 m, you can afford a slightly noisier signal, especially if it means the difference between getting data or not. Practical experience has shown that you can still get good results when the SNR drops to 50 and even a little below, implying a precision of about 0.02 m.
However, in those cases you’ll probably need to get more data so that the analysis can average out the noise and so find a period and amplitude. This approach has been used quite often to turn what, under other circumstances, might have been marginal data at best into usable data. In short, get the highest SNR you can, but don’t use a hard-and-fast rule as to what will and won’t work. I’ve worked asteroids with SNRs well over 100 that generated lightcurves with data points loosely tied to the average curve, while others, with SNRs of only 20–40, generated lightcurves that were smooth and had tightly fitting data points.
4.6
Seeing and Scintillation
Seeing is a measure of atmospheric turbulence. When there are large temperature differences between layers of air, the seeing is usually very poor, meaning – in simple terms – that the visible disc of a star is enlarged and so covers a greater number of pixels. This is because the refractive index of each layer is different, and so incoming light is distorted from its parallel path before it gets to the observer. In quantitative terms, seeing is often expressed in arcseconds, meaning the size of star images at full width at half-maximum (FWHM), which is the width of the star’s profile at a height equal to one-half the maximum value. At the most elite sites in the world, seeing is often 0.5 arcseconds. Average sites usually have 2–3 arcseconds seeing. As the light from the star is spread over more pixels, the star’s profile becomes asymmetrical and dark current noise increases. The net result of all this is decreased precision and accuracy. When the seeing turns stars into “fuzzballs,” it’s best to find something else to do. Of course, that point is relative. Some observers give up on nights that others would consider superior. Scintillation is a slightly different phenomenon, caused by a star’s light arriving not as a single packet but as several, each “imaged” by small cells in the Earth’s atmosphere. Each cell affects the star’s light differently. If you see the light from a small number of cells, e.g., using a small aperture like your eye, then the star may twinkle – noticeably change brightness in rapid fashion. A larger telescope reduces this effect because it averages a larger number of cells to produce the single image at the eyepiece or camera. For an instrument of any given size, the way to reduce these fluctuations is to take an exposure of sufficient length that they average out and, more important, the error in that average reaches an acceptable level.
The general recommendation is to use exposures no shorter than ten seconds, unless conditions and tests allow otherwise. You’ll find a detailed analysis of seeing and scintillation in High Speed Photometry by Brian Warner (the real Brian Warner). Also take a look at http://www.aavso.org/observing/programs/ccd/airmass.shtml, which calculates scintillation errors in magnitudes for given circumstances.
4.7
Matching Pixel Size to Seeing
One of the important issues when choosing a CCD camera is matching the pixel size to the focal length of the system. Often you’ll hear a rule of 2 arcseconds per pixel. This is not the best rule. Instead, you should have a scale such that each pixel covers about one-half the FWHM for your average seeing. For example, say the average seeing at your location is 4–5 arcseconds. In this case, 2-arcsecond pixels would be acceptable, as it would take about two pixels to cover the full image of the star. On the other hand, if your seeing is around 1–2 arcseconds, then you need to use much smaller pixels, i.e., 0.5–1 arcsecond. Using pixels that are too small is called “oversampling” and is less efficient, since the light of the star is spread over a larger number of pixels. This increases noise and, therefore, decreases the signal-to-noise ratio (SNR). “Undersampling” is when the pixels are too large. In this case, you are not getting a good statistical profile of the star, and so the accuracy of your photometry suffers. If anything, you want to oversample the image slightly, i.e., have 2–3 pixels per FWHM. If your site has regular periods of better-than-average seeing, use the best seeing your site provides to determine the best pixel size. When the seeing deteriorates, you can go to 2x2 or even 3x3 binning to get a better pixel match. This gives you “the best of all worlds” and doesn’t keep you from taking advantage of those better nights. If those nights of exceptional seeing are not very common, then you’re probably better off working with the lower side of average when making your decision.
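The pixel scale itself follows directly from the pixel size and focal length; a small sketch (the constant 206.265 converts microns over millimeters to arcseconds, from the 206,265 arcseconds per radian; the example system is hypothetical):

```python
# Image scale in arcseconds per pixel.
def pixel_scale(pixel_size_um, focal_length_mm):
    return 206.265 * pixel_size_um / focal_length_mm

# Hypothetical system: 9-micron pixels at 1000 mm focal length.
scale = pixel_scale(9.0, 1000.0)        # about 1.86 arcsec/pixel
binned = pixel_scale(2 * 9.0, 1000.0)   # 2x2 binning doubles the scale
```

With 4-arcsecond seeing, this unbinned scale sits near the 2–3 pixels per FWHM target; for much better seeing you would want a longer focal length or smaller pixels.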
4.8
Bias Frames
In order to have a final image that represents data only from the target, stars, and sky background, you must remove, among other sources, the noise in the image that is the result of the camera electronics. This is the purpose of the bias frame. In theory, you should take a zero-length exposure with the shutter closed to obtain a bias frame, so there is no buildup of noise from a longer exposure. However, the drivers for some CCD cameras do not allow zero-length exposures. If this is the case, take the shortest possible exposure. Actually, you should take several of these images and then average or median combine them to form a master bias frame. For the most part, bias frames are not required if you expose your dark frames (see below) for the same time and temperature as your target frames, since the bias is included in the dark frame. Bias frames are more for when you use scaled darks, i.e., dark frames taken at a different exposure time and/or temperature than the science image. Scaled darks are often used when the exposure for the raw images is on the order of tens of minutes. I won’t go into scaled darks at this point. The theory, creation, and use of bias frames are explained very thoroughly in some of the books listed in the Bibliography.
4.9
Dark Frames
Dark frames are used to remove the noise within the CCD chip caused by randomly generated electrons due to the atoms in the material bumping into one another, i.e., “thermal noise.” Cooling the chip reduces thermal noise by making the atoms move more slowly, so that fewer collisions result in free electrons. Dark frames are usually taken at the same temperature and duration as the target image. For example, if you take a 1-minute image at –30°C, then you would use a dark frame that was also 1 minute long at –30°C. The dark frame must also have the same number of rows and columns as the target images to which it will be applied, i.e., both images must have the same binning. To create a master dark frame, take several raw dark frames of the appropriate duration and temperature. Then either average or median combine them into a single image. Even if the shutter is closed, strong ambient light can creep into the camera. Don’t try to build your library of dark frames in the middle of the day, when it may not be possible to cool the camera to the required temperature anyway. Wait until a cloudy night or a full moon rolls around. I’m sure you’ll get one or the other without having to wait too long. For me, building a library of master darks for my usual one- to three-minute exposures is a good use of that down time. Again, dark frames include the bias value. So, if you’re using darks that are of the same exposure and temperature as your target exposures, the bias subtracts out just as if you had created and used a master bias frame. If you’re using scaled darks, then you must create a separate master bias frame, since the process of scaling the darks alters the relationship between the bias and thermal noise.
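Creating a master dark is a straightforward stack-and-combine; here is a sketch using NumPy with simulated frames (real frames would be loaded from FITS files, which is not shown here):

```python
import numpy as np

# Simulated stack of nine raw dark frames of the same exposure,
# temperature, and binning (Poisson thermal signal around 100 e-).
rng = np.random.default_rng(0)
darks = rng.poisson(lam=100.0, size=(9, 64, 64)).astype(np.float64)

# Median combine: more robust than a plain average against outliers
# such as cosmic-ray hits in individual frames.
master_dark = np.median(darks, axis=0)

# Calibrating a matching science frame is then a simple subtraction
# (the bias is carried inside the dark, as described above).
science = rng.poisson(lam=600.0, size=(64, 64)).astype(np.float64)
calibrated = science - master_dark
```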
4.10
Flat Fields
Not all pixels are created equal. Some are more or less sensitive than the average pixel. The purpose of the flat field is to account for these variations. Flat fields are also used to correct problems in the optical path such as dust on a filter or the camera window as well as vignetting, all of which have the net effect of reducing light to one or more pixels. There are limits to how much a flat frame can compensate for light loss due to obstructions and vignetting. A flat frame cannot counter a totally opaque spot on the chip, as sometimes happens when dirt gets in the camera. Flats also cannot correct for effects from external sources such as scattered or reflected light (moon or streetlight) hitting the primary mirror or objective lens. What’s important to understand about flat fields is that they represent the response of the entire system, meaning the telescope and camera. Response can be measured on different scales, e.g., the overall sensitivity of the chip using a given filter down to dust on the glass cover of the chip affecting the sensitivity of a few pixels. If you’ve ever wondered what those faint “donuts” on your images were, they are shadows of dust particles in the system. The smaller the donut, the closer
the dust particle is to the chip. It won’t do to have the target sitting in the donut hole and the comparison on the donut and not use a flat field to even things out. Your photometry software applies the flat field to the raw image by multiplying, or dividing, the value for each pixel in the image by the value for the same pixel in the flat field. This implies that the flat field must have the same number of rows and columns as the raw image. The values in the flat field are usually normalized before processing, meaning that pixels of average value are assigned a value of 1.00. Assuming that the flat is applied by multiplication of pixel values, less sensitive pixels would have values > 1 and those more sensitive would have values < 1. All of this should be “under the hood” of your software. You don’t really need to know if the flat field is applied by multiplication or division, only that it’s done right. If the software documentation doesn’t specifically say how it works with flats (or darks), ask. Most likely the author will be glad to explain. When taking raw flat images, the system should be near the same focus as when you take target images, usually infinity. If not, then some of the dust particles may produce different-sized donuts than at infinity focus. Once you have a master flat, you shouldn’t move the camera or change anything about the system. If you do, you’ll have to repeat the process. The good news is that once you have a good master flat, you can usually use it for a few days, if not weeks. Don’t go too long. Those dust particles do accumulate. Flat fields are one of the most difficult things about good photometry, but without them you can’t do higher precision photometry. If you have an exceptionally good setup, you might be able to achieve 0.02–0.05 m precision without a flat field. Some asteroid lightcurves and variable star secondary minima have amplitudes at or below that level. There are several methods for obtaining flat fields.
Each has its own advantages and disadvantages. Most of them share the common concept of shooting a series of images of an evenly illuminated source. The individual images are dark and (optionally) bias frame corrected and then merged by averaging or median combining them into a single master flat.
4.10.1
General Considerations
It’s important when you’re shooting the raw flats for a given filter that you keep the average (or median) of the pixel values approximately the same from image to image. This often means increasing the exposure when taking flats with filters or as the twilight sky changes. Try to get about 50% saturation for non-anti-blooming chips and a little less for anti-blooming chips. Don’t go too low or you’ll have too much noise in the master flat. Remember that any noise in the master flat becomes part of the final image. If you are shooting flats with filters and using the twilight sky for illumination, shoot the darker filters first so that you have the most light. Since most CCDs have a shutter of some sort, you must be careful not to take flats with too short an exposure. Otherwise, the CCD chip will not be evenly illuminated over its entire surface. For example, if you have a camera with a four-vane shutter that opens from the middle, expands out, and then collapses shut, the regions nearest the edge of the chip receive less exposure than the center of the chip. Use an exposure such that the time it takes the shutter to open becomes an insignificant fraction of the total time. A good start would be 5–10 seconds. Just like any image, flats have dark noise, and so you need to take dark frames for your flats as well. There are some who disagree, but try taking a 10-second exposure with and without a dark and see what you think. The process of getting darks for flats is more complicated when you use different exposures for your flats as the twilight sky darkens or you insert filters. Remember that dark frames should be taken at about the same exposure and temperature as the image to which they’ll be applied. This is one time where bias frames and scaled darks might be effective. However, I’ve been able to avoid using them by working quickly and efficiently during twilight and – most important – having a pre-built library of short-exposure darks taken at the temperature that I usually use.
4.10.2
Clean Filters Versus Flats
It would seem obvious that clean filters would make getting flats easier, because there aren’t any dust particles on the filter to create “dust donuts.” However, the problem of clean filters extends beyond simply keeping dust off the filters. Fellow photometrist John Menke wrote about his troubles getting good flats. It turns out that his filters had developed a thin layer of “snowlike stuff.” The “snow” turns out to be residue from a chemical process involving one of the layers of glass in the B and V filters. The net effect on photometry is significant and – most important – inconsistent. John was able to remove the layer using special cleaning fluids and a very soft cloth. This does not always work. An alternative is to use the fine cerium oxide polishing powder provided in many mirror-making kits. Mix a little in water and, using a clean fingertip, gently rub the compound over the filter. Do this in small steps, washing with distilled water between each session, until the material is removed. Make sure you do not get water between the layers of the filter. If you do, plan on buying a new filter. The polishing technique should be used only on uncoated filters, since polishing can damage anti-reflection coatings. Filter manufacturers are aware of the problem and are working to produce filters with protective coatings.
4.10.3
Twilight Flats
Twilight flats are taken during, of course, twilight. The scope is aimed near the zenith and, once the sky is dark enough to avoid saturating the chip, you start taking the raw flat field images. If filters are being used, this must be done for each filter. There are different theories as to the best place to aim your scope. Try near the zenith on the opposite side of the meridian from the sun. Usually the
scope is moved slightly between images. This allows any stars in the images to be removed when the raw images are median-combined to make the master flat. The problem with this method is that the time between when the sky is dark enough to avoid saturation and when too many stars appear in the image can be very short – often only a few minutes. If you’re working with several filters, time is tighter still. Try putting a plain T-shirt or a uniform piece of milky-white plastic, such as that used for signs, over the face of the telescope so that no stars are imaged. This way you can shoot a little longer into twilight without having to worry about stars filling the field or moving the telescope between images.
4.10.4
Dome Flats
Dome flats are obtained by shooting an evenly illuminated portion of the inside of a dome or a large flat white card mounted on a wall. The card is not directly lit, since it’s difficult to get even illumination that way. Instead, it’s illuminated by light bounced off another bright card or wall. This method is preferred by some because the intensity and color of the light can be controlled. Color does have a slight effect on flat fields; in many cases, however, the effect is small enough to ignore. A word of caution if you construct a white card or surface for dome flats: not all white paint is created equal, and white paints do not reflect all colors equally or even nearly so. There are some paints that, while they appear white, are actually quite red. This can make for very long exposures when shooting flats with B or U filters.
4.10.5
Light Boxes
The light box approach combines the two previous ideas. A box is constructed that fits in front of the system. Inside the box are dim lights that evenly illuminate a sheet of uniform diffuse plastic. You can find several examples by doing a Google search on the Internet. The AIP book by Berry and Burnell also has a good discussion about light boxes and details for building one. This idea works best for smaller scopes. It’s hard to hang a light box on the front of a 1-meter telescope. Again, the source of light is critical. Do not use white LEDs for your illumination source. They are highly deficient in some colors and excessive in others. Use standard tungsten bulbs that cover a full range of colors.
4.10.6
All-Sky Flats
Another technique used by some calls for shooting relatively blank areas of the sky during the night – certainly no field with a very bright star – and then median combining the images. The stars cancel out, leaving only the sky background as seen by the system. I’ve used this method at times when I didn’t have time to get twilight flats at the start of the run. The results have been very good. They are much better than no flats at all and serve well when doing differential photometry where there are no plans to convert to standard magnitudes.
4.11
Photometry Apertures and Annuluses
When you click on a star in many photometry programs, two or more concentric regions appear, centered on the star (see Fig. 4.1). The program may use circles or squares, allow one or the other, or even custom shapes. For this discussion, let’s assume circles. The size of the seeing disk comes into play when choosing the size of these circles as you measure your images. The inner circle defines what I call the measuring aperture, or just aperture. The next region is the “dead zone,” in which all pixels are ignored. This prevents stars near the target from being measured and keeps some pixels from being used twice, once for the data and once for the sky background. The outermost region is the sky annulus, or just annulus. Technically, annulus implies circles, but I’ll call it an annulus regardless of the actual shape. The pixels within the measuring aperture are used to calculate the total signal of the target. This is found by summing all the pixel values within the area, which includes the sky background as well. The value for the target alone is found by summing the pixel values (actually, the number of electrons derived from the pixel values) within the aperture and subtracting the number of pixels within the aperture times the sky background value. For example, say you have 50 pixels within the measuring aperture and the total number of electrons is 48,000. Say also that the average sky background value was found to be 185. The actual value for the object being measured is then
ActualValue = 48,000 – (50 * 185) = 38,750
Again, remember that when doing photometry, the software must convert pixel values to a number of electrons before an instrumental magnitude is computed. Unless noted otherwise, all the values mentioned here are in electrons, not the ADU values of the pixels. Recall also that the number of electrons is found by multiplying the total of the ADU values by the gain, which is the number of electrons per ADU (e–/ADU).
Most software will “normalize” the actual value so that it is based on an exposure of 1 second or some other arbitrary value, i.e., (4.10)
NormalizedValue = ActualValue / ExposureTime
In the example above, if the exposure were 60 seconds, then the normalized value would be
NormalizedValue = 38,750 / 60 = 645.83
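The worked example, from sky subtraction through normalization, in a few lines:

```python
# Values from the example in the text (electrons, after applying the gain).
npix_aperture = 50       # pixels inside the measuring aperture
total_electrons = 48_000 # summed signal inside the aperture
sky_per_pixel = 185      # average sky background per pixel
exposure = 60.0          # seconds

# Subtract the sky contribution, then normalize to a 1-second exposure.
actual_value = total_electrons - npix_aperture * sky_per_pixel   # 38,750
normalized_value = actual_value / exposure                       # 645.83
```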
Figure 4.1 Measuring a star. The object’s brightness and position are computed by using the sum of the values within the inner circle – the measuring aperture, less the average sky background times the number of pixels within the measuring aperture. The background is computed using the values in the region that’s outside the middle and inside the outer circles, i.e., the sky annulus.
Normalization is used so that values from images with different exposures can be compared directly. If a correction were not applied, then a star of the same magnitude would have a larger value in an image with a longer exposure than in an image with a shorter exposure. This would prevent direct comparisons among different images. The sky background value is found by applying one of a number of methods to the values within the sky annulus. In some cases, the method is as simple as taking the average or median value. However, the problem with such an approach is that it does not allow for pixels that are exceptionally dark (cold pixels) or bright (hot pixels). Another approach is to sort the pixel values, eliminate the top and bottom 20% (or some other fraction), and find the average of what remains. Entire papers have been written on this subject alone. Your software documentation should explain which method it uses and whether you have any control over it. Again, if the documentation is lacking, then ask the developer how the software computes the sky background.
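The trimmed-average sky estimate just described can be sketched as follows (the annulus values are invented, including one hot and one cold pixel):

```python
import numpy as np

def trimmed_sky(annulus_pixels, trim_fraction=0.20):
    """Sort the annulus values, drop the top and bottom fraction,
    then average what remains."""
    values = np.sort(np.asarray(annulus_pixels, dtype=float))
    n_trim = int(len(values) * trim_fraction)
    return values[n_trim:len(values) - n_trim].mean()

# Mostly uniform sky with a hot pixel (4095) and a cold pixel (12).
annulus = [184, 186, 185, 183, 187, 185, 4095, 12, 186, 184]
sky = trimmed_sky(annulus)   # 185.0; a plain mean would give ~558.7
```

The single hot pixel would badly bias a simple average, while the trimmed estimate lands on the true sky level.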
4.11.1 The Measuring Aperture Size
Recall that the profile of a star usually follows a steep bell curve, often called a Gaussian curve. If you use too small a measuring aperture, you lose some of the data on the descending branches of the curve. There are times when it’s permissible to lose a little of the profile, but don’t take the idea to extremes thinking you’ll get better photometry. If your software reports the SNR of the data, try changing the size of the measuring aperture and noting how the SNR value changes. If you start with a large aperture and go smaller, the SNR will (usually) increase. At some point, when the aperture is smaller than the disk of the star, the value levels off or even goes down. Fig. 4.2 shows the effect on SNR of changing the aperture size. It’s important to note that the plot was for a fairly bright star (note the high SNR). If you repeat the experiment using fainter stars, you’ll see that the optimum aperture width might be a little smaller. Naturally, the fainter stars will have a smaller SNR value as well.
Figure 4.2 The SNR of a star when measured using different-sized measuring apertures. The FWHM of the star was 3.00 pixels. The vertical axis gives the SNR of the star; the horizontal axis is the diameter of the aperture in pixels.
Something that many people have a hard time appreciating is that the width of the profile is almost exactly the same regardless of how bright the star appears on the image. The only difference between a bright star and a faint one is the height of the curve. The brighter star appears larger because there are more pixels with values above the background level that you can see on the screen. The fainter star has fewer pixels with values above that level and so appears smaller. I point this out because the temptation may be to choose a measuring aperture based on how well a faint star fits within the aperture. If you then measure a brighter star, you may see that it spills over the edge of the aperture you selected. It has been shown (see Howell’s book) that almost all of a star’s data is contained within a circle that has a diameter of 3*FWHM. If your software measures the FWHM, then select an aperture that is 2–3 times the average FWHM value of several stars.
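You can check the 3·FWHM rule yourself under the assumption of a circular Gaussian profile, for which the enclosed flux at radius r is 1 − exp(−r²/2σ²):

```python
import math

def enclosed_fraction(diameter_in_fwhm):
    """Fraction of a circular Gaussian star's flux inside an aperture whose
    diameter is the given multiple of the FWHM (FWHM = 2.3548 * sigma)."""
    sigma = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # with FWHM = 1
    r = diameter_in_fwhm / 2.0
    return 1.0 - math.exp(-r * r / (2.0 * sigma * sigma))
```

An aperture with a diameter of 3×FWHM captures about 99.8% of the flux, while a diameter of 1×FWHM captures exactly half, which is why too small an aperture costs you signal.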
4.11.2 The Sky Annulus Size
You must also take into consideration the size of the sky annulus. Even if you have excellent dark frames and flat fields, you can’t assume a perfectly even background at all points on the image. In general, a larger sky annulus is better than a smaller one, with the usual caveat about not taking a rule of thumb to extremes. If you use too small an annulus, you are relying on there being no abnormal data within that area, such as a faint star or a cosmic ray hit. Such an abnormality artificially increases the background value, making the measured object appear fainter than it really is. If you increase the size of the annulus, you add more pixels, which usually reduces the influence of the abnormal pixels. How well the abnormality is handled depends on the algorithm used by the program.
I use an annulus about the same size as the measuring aperture. So, if the measuring aperture is 11 pixels wide, I make the sky annulus at least that wide. However, I also take the general background into consideration. Sometimes I’ll use a larger annulus in order to include more pixels and so help remove the effect of one or two faint stars. A smaller annulus is used if there is a single, bright star just encroaching into the annulus region. In this case, using a larger annulus might eliminate too many pixels from the background calculations, producing a poor statistical sampling.

One way to check your selection is to measure the star and note the SNR or raw instrumental magnitude, if reported. Keep the measuring aperture the same size but increase the size of the sky annulus. At some point, the SNR levels out (see Fig. 4.3). There is some interaction with the size of the measuring aperture, so you may have to experiment a bit. Eventually, you’ll be able to look at an image and have a good feel for the best aperture and annulus sizes.

I recommend another test, which is more a check of the software than anything else. Find an image where you can measure a star such that a nearby field star is just outside a small sky annulus. Note the reported SNR or, better yet, the instrumental magnitude if it’s available. Then, while keeping the measuring aperture the same size, increase the sky annulus so that the nearby field star falls within it. Measure the star’s SNR or instrumental magnitude again. There should be no difference, though a change of 0.01 m or less might be acceptable. This tells you how well the sky background algorithm handles field stars in the sky annulus.
Figure 4.3 The effect on the measured instrumental magnitude of a star (FWHM = 3 pixels, SNR = 450) when changing the area of the sky annulus but keeping the area of the measuring aperture the same. The vertical axis is the instrumental magnitude, while the horizontal axis is the area of the sky annulus in square pixels.
4.11.3 The Shape of Things
Should the apertures be circular, square, or rectangular? You’ll find advocates for each. From a programmer’s viewpoint, the square aperture is much easier to handle, since only those pixels that are entirely within the rectangle or square are used. With a circular aperture, the circle can intersect pixels. In this case, the program must determine how much of each such pixel lies within the boundary and use a proportional amount of its value when summing all pixel values within the circle. A circular aperture fits into tighter spaces since it matches the shape of the stars. Tests comparing the two show that a circular aperture gives slightly lower noise and scatter, probably due to a smaller sky contribution within the measuring aperture.

You can also use a rectangular or elliptical aperture. This might be appropriate when measuring a fast-moving asteroid whose image is elongated, so that a non-symmetrical aperture fits the trailed profile better. When you then measure a star, which is presumably round, you’ll have more sky background than you may like along one axis. Of course, you can track the asteroid, making it round and the stars elliptical. Some photometrists do this and then use different-sized apertures for the asteroid and stars (a larger one for the stars to include the streaked image). This requires some extra work to scale the values so that they are all as if the same-sized aperture had been used for the asteroid and stars. However you approach this problem, remember that you can never get a better SNR than when the stars and target are both round and you use a single, well-sized aperture. The best you can do is keep the drop in SNR to a minimum.

A way around the issue of non-symmetrical apertures and low SNR is to take a set of shorter exposures where neither asteroid nor stars are streaked and then sum (stack) the images to get an improved SNR. The critical issue in this case is that the software provides good values for the time and exposure of the stacked image.
The effective time is not necessarily the average of the times of the individual exposures. The effective exposure, however, is the sum of the exposures and is needed to compute the instrumental magnitudes properly (see Eq. (4.10)).
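The exposure bookkeeping for stacked frames can be sketched as follows (a minimal illustration; the per-frame values are assumed to be sky-subtracted electron counts, and determining the effective mid-time is left to your software):

```python
def stacked_normalized_counts(frame_counts, frame_exposures_s):
    """Sum the sky-subtracted counts from each frame and normalize by the
    total (effective) exposure, per Eq. (4.10). Using the sum of the
    exposures, not their average, is what keeps the magnitudes correct."""
    return sum(frame_counts) / sum(frame_exposures_s)
```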
4.11.4 Aperture or PSF Photometry
I won’t go into any detail about aperture versus PSF (point spread function) photometry. In some cases, PSF is better than aperture photometry, most noticeably in very crowded fields. However, even there it has its limitations. From a programmer’s standpoint, it’s much more difficult to implement. This topic makes good additional reading for those wanting to move to the advanced levels of photometry. For me, aperture photometry has worked fine for many years. I may not know what I’m missing in regard to PSF, but from what I can tell, the benefit of working it through wouldn’t warrant the time taken away from other things.
4.12 Reporting Errors
The fundamentals and details of statistics and propagation of errors are beyond the scope of this manual. You’ll find a good introduction in the books by Henden–Kaitchuck and Howell (see the Bibliography). Your photometry software should allow you to find and, if appropriate, plot errors, e.g., error bars on a lightcurve. By reviewing errors, you can often spot outlier points that you can re-examine to see what might have caused them. In many cases, you can find a valid reason to remove the offending points. Doing so has often made the difference between finding the right lightcurve period and not. However, you should never delete data entirely. If you believe a data point is bad, remove it from the calculations, but don’t erase it so that you can never use it again.

There will be a temptation, when working asteroid lightcurves of low amplitude, to remove some seemingly bad data points. Do this with great care. Those points may be evidence of a satellite, where the “dip” in the curve is often only 0.02–0.03 m out of a 0.1–0.15 m amplitude. Consider where we’d be if photometrists removed a “spurious” data dip of 0.01 m seen in a bright star but lasting for an hour or more. So much for finding extrasolar planets!

When reporting your data, include your error estimates and perhaps a brief explanation of how you derived those values. This can help those reading your report to determine the weight that should be given to your data versus those from others. It may also lead to unexpected results. For a single observation, give the estimated error in magnitudes. For a lightcurve with tens if not hundreds of data points, give the overall error for each set of points within the curve. For example, if the data are all from one night and one observer, you need give only one error estimate. If you’ve combined data from several nights, then give an overall error and, if appropriate, the error for each night.
Remember that when you’re measuring images, you’re sampling digital representations of analog data. There are any number of sources for errors. It’s nearly impossible to state any observation with absolute certainty – unless it’s to say that it’s not absolutely certain.
Chapter 5
Photometric Reductions

Robert Frost wrote of following the path least taken. The methods for reducing magnitudes to standard values described in this chapter are going to take you down a less familiar path, but don’t worry. It’s not filled with any more potholes than the well-trod path and, in fact, I believe it is smoother and has fewer curves and detours. In the end, you’ll arrive at the desired destination and, I hope, with more confidence in your ability to retrace the route on your own.

The reduction methods are being discussed now so that you have a solid understanding of what’s required to get your observations on a standard system before you take the first image. After working with both neophyte and experienced photometrists, I’ve come to realize that it’s better to give a little theory up front, especially when it gives what’s done at the telescope a better-defined purpose.

The reduction methods in this section differ from the traditional steps in that the new methods rely much less on observations in two or more filters. Many people work only with Clear or V and so don’t want to bother with making periodic observations in multiple colors throughout an observing run. To work primarily in one filter, you still need to get images in two standard colors for the final reductions, but this can be done just once sometime during the run or even on a separate night. In the coming sections I’ll show you how you can get to standard magnitudes with a minimum of time and effort.

In order to save space and, more important, to avoid confusing the discussion, I will not include the material on the traditional reduction methods here. For those wanting to work with those traditional methods, previously described in the first edition of A Practical Guide and in even greater detail in the Henden–Kaitchuck book, I refer you to those sources.
There are times when you might want to follow those more detailed methods, such as when working on a variable star as part of a collaborative effort. Even then, you could use a variation of what’s to follow.

In many cases, it’s not required to put observations on a standard system. Many an asteroid’s rotation rate has been determined using simple differential instrumental magnitudes, i.e., the difference between the magnitude measured for the asteroid and that of a comparison star (or the average of several comparisons). So why bother with reductions to a standard system at all? Sometimes you have to follow an asteroid over a long period of time because its period is long or is almost an exact multiple of the interval between observing runs. The latter means you’re covering almost the same part of the curve each time. Unless you’re able to capture a complete cycle with each run (or nearly so), it may take weeks to get enough data to cover a complete cycle. Interference from clouds or the moon may mean making observing runs days if not weeks apart. In these cases, tying the data from one run to another can be very difficult at best. The solution is to put all the data onto a common magnitude system.

In the coming sections, I’ll present the reduction methods in the order in which they should usually be used. That way you can see how each step uses the results of the steps that preceded it. At the end, I’ll cover an even simpler approach that can get you close to a standard system. It’s not as rigorous and is less accurate, but it may be enough in many cases.
5.1 The Different Path
Let’s take a look at the basic reduction formula for the V filter that you’ll often see in photometry books:

    V = v − k'VX + TV(CI) + ZPV    (5.1)

where

    V      reduced standard V magnitude
    v      instrumental magnitude in the V filter
    k'V    first-order extinction in V
    X      air mass at the time of the observation
    TV     transformation for the V filter
    CI     standard color index of the star (B–V, V–R, or V–I)
    ZPV    nightly zero-point for the V filter
Parallel formulae are used for the B, R, and I filters. This is one place where the current approach differs from the traditional. Usually the formulae for the other filters are expressed in terms of a color index; i.e., the traditional form finds a solution for (B–V) and not the B filter alone. Also, in the traditional approach the term CI, for filters other than V, is expressed in instrumental magnitudes and not the reduced standard values that the current approach will find and use. In Eq. (5.1), v − k'VX is often written as just vo, meaning the brightness of the star above the Earth’s atmosphere. Rearranging Eq. (5.1) gives the form

    V − v − TV(CI) = −k'VX + ZPV    (5.2)
Note that the left side of the equation is a constant value. From this, you can see that any change of k'V requires a change in the value of ZPV to keep the right side equal to the left for a given air mass, X. More important to realize is that the value of TV(CI) is independent of both k'V and ZPV. In other words, as you’ll see later, the value TV is the slope of a linear solution. The slope does not change because the values of k'V or ZP change. This means that you can ignore both first-order extinction and the nightly zero-point terms when finding the transforms for each filter.
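This independence is easy to demonstrate with synthetic data (all numbers below are made up for the illustration): invert Eq. (5.1) to generate instrumental magnitudes under two very different extinction/zero-point assumptions and fit the slope of (V − v) against CI in each case.

```python
def fit_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical reference stars: standard V magnitudes and color indices.
CI = [0.30, 0.50, 0.70, 0.90, 1.10]
V = [12.0, 12.5, 11.8, 13.0, 12.2]
T_v, X = -0.13, 1.2  # assumed transform and air mass

def instrumental(k_prime, zp):
    """Invert Eq. (5.1): v = V - T_v*CI - ZP + k'X."""
    return [v - T_v * ci - zp + k_prime * X for v, ci in zip(V, CI)]

slope_a = fit_slope(CI, [a - b for a, b in zip(V, instrumental(0.0, 20.0))])
slope_b = fit_slope(CI, [a - b for a, b in zip(V, instrumental(0.4, 18.5))])
# Both fits recover T_v even though k' and ZP differ wildly.
```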
There is one very important restriction. If you intend to apply the transforms directly to raw instrumental magnitudes to find the standard magnitudes for a number of targets (absolute photometry), then the air mass of each target field and that of the reference field when you imaged it must be nearly the same. Another look at Eq. (5.2) shows you why: the value of ZPV is valid only for a given value of k'VX. On the assumption that k'V is constant through the night, the value of ZPV then depends on the value of X. You’ll see later that even the restriction of equal air masses can be dropped, save for the most critical cases.

Those familiar with the reduction process may be asking, “What about second-order extinction?” This is a good question. For now, assume that second-order extinction terms can be ignored out of hand for V, R, and I. For absolute photometry, this may not be true for the B and C filters, but it is mostly true for the methods outlined in this book. Chapter 6 covers second-order extinction in more detail.
5.2 The Differential Formula
What follows is designed to provide data for the final reductions. Unlike the traditional approach, you will not be finding a standard color index, e.g., (B–V) or (V–R), from instrumental values and then applying that in a formula to find the standard V magnitude. Instead, you will be finding the standard magnitude in one or more colors independently. If you want to determine the color index, simply do the necessary math. I have found this approach more straightforward and easier to apply. The goal is to use the differential formula for finding a standard magnitude,

    ΔM = (mo − mc) + T * (CIo − CIc)    (5.3)

where

    ΔM     differential magnitude in a standard color
    mo     instrumental magnitude of the target
    mc     instrumental magnitude of the comparison
    T      transform for the given filter
    CIo    standard color index of the target, e.g., (V–R)
    CIc    standard color index of the comparison, e.g., (V–R)
You can see that you must know the standard color index of the target and comparison beforehand. That is one of the steps of the coming process. Fortunately, you can determine these values just once, using a few images taken at the beginning of a run, and then concentrate on working the target field as you want. There will be no need to break the observing program to get additional images for finding the color index periodically. If you’re using more than one comparison star, compute the differential magnitude for each comparison separately and then apply the standard magnitude of that comparison to find the standard magnitude of the target based on that one
comparison. Use the average of the derived values to determine the final reduced value of the target and the standard deviation of the values. Mathematically,
    M = [ Σ j=1..N (Mc + ΔM)j ] / N    (5.4)

where

    M      reduced standard magnitude of the target
    Mc     standard magnitude of the comparison
    ΔM     standard differential magnitude (object − comparison)
    N      number of comparisons used
The temptation may be to take the average of the differential instrumental values and then apply them to the average standard magnitude of the comparisons. This would work only if the comparisons were all exactly the same color. Remember that the differential value includes the difference in color between the target and comparison and so can be applied only to a star with the same color. I found this out the hard way, when I used the wrong approach and couldn’t get data to match better than ±0.03 m. Once I used the right approach, the data agreed to ±0.01 m. There is an important exception (isn’t there always?). Some variable stars change color during their cycle, e.g., Cepheids. In this case, you may want to get observations in multiple filters more frequently and then determine the range of values of color index against the period of the curve. You can then apply an appropriate color index based on when the observation was made within the lightcurve cycle.
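A sketch of the per-comparison reduction (the magnitudes, colors, and transform below are hypothetical): each comparison is reduced through Eq. (5.3) on its own, and only the resulting standard magnitudes are averaged, per Eq. (5.4).

```python
def standard_mag(m_obj, ci_obj, comps, T):
    """Reduce a target to a standard magnitude.
    comps: list of (instrumental mag, standard CI, standard mag) tuples,
    one per comparison star. Averaging happens only after each comparison
    has been reduced separately, so their differing colors are handled."""
    reduced = []
    for m_c, ci_c, M_c in comps:
        dM = (m_obj - m_c) + T * (ci_obj - ci_c)  # Eq. (5.3)
        reduced.append(M_c + dM)
    return sum(reduced) / len(reduced)            # Eq. (5.4)

# Two comparisons of quite different color both yield the same answer.
comps = [(18.04, 0.40, 13.0), (18.59, 0.90, 13.5)]
M_target = standard_mag(19.06, 0.60, comps, T=-0.10)
```

Averaging the raw differential magnitudes first, instead, would mix the color terms of unlike comparisons, which is exactly the mistake warned against above.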
5.3 Clear to Visual Conversions
First, and most important, understand that it is very unlikely you can get a perfect match of an unfiltered system to the standard V band. Don’t expect accuracy on the order of millimags; however, 0.01–0.02 m is not unreasonable. The pass band of your system when using a clear filter most likely does not match the peak or shape of the V pass band. The situation is not hopeless, especially if you use differential photometry and follow a simple rule when finding the standard magnitudes for the comparisons.
5.4 First-Order Extinctions – Are They Really Necessary?
Whether you need to take time to find first-order extinction values depends on your observing program. If you’re working a single field all night and that field is near a reference field that you can use to get your transforms, then you probably don’t need to determine first-order extinction and zero-point values. That’s because the errors due to the air mass difference are small enough to ignore. If, however, the target and reference fields are separated by more than a few degrees, the k'X term can become significant when finding the standard magnitudes for the comparison stars and the color index for the stars and target.

Because the value of the transform is independent of k'X and ZP, you can use a value of 0.0 for k' when finding the transforms, i.e., assume that the raw instrumental magnitude is an exoatmospheric value. Later on, you’ll see this makes finding the true first-order extinction much easier. However, be aware that the value of ZP that’s found is valid only for that assumed extinction of 0. You can reduce the errors by using an assumed value that is closer to reality, one based on previous determinations or experience. Using this value to find transforms and associated zero-points still introduces some errors, but they’ll be considerably smaller than with k' = 0. Table 5.1 gives you an idea of those errors.

               Errors for k' = 0.25    Errors for k' = 0.30
                   Assumed k'              Assumed k'
  z      X        0      0.2            0      0.2     0.25
  0    1.000    0.001   0.000         0.001   0.000   0.000
  5    1.004    0.003   0.001         0.003   0.001   0.001
 10    1.015    0.005   0.001         0.006   0.002   0.001
 15    1.035    0.007   0.001         0.009   0.003   0.001
 20    1.064    0.010   0.002         0.012   0.004   0.002
 25    1.103    0.013   0.003         0.015   0.005   0.003
 30    1.155    0.017   0.003         0.020   0.007   0.003
 35    1.221    0.021   0.004         0.025   0.008   0.004
 40    1.305    0.027   0.005         0.033   0.011   0.005
 45    1.414    0.035   0.007         0.042   0.014   0.007
 50    1.556    0.047   0.009         0.056   0.019   0.009
 55    1.743    0.064   0.013         0.077   0.026   0.013
 60    2.000    0.092   0.018         0.110   0.037   0.018

Table 5.1 Errors, in magnitudes, when using an assumed value for first-order extinction vs. the actual value. If 0.01 m is considered the “critical point,” then the assumed value must be within 0.05 m/air mass of the true value if reductions are to be accurate when working near the 30° altitude cutoff.
Column 1 is the zenith distance. Column 2 gives X, the air mass, using the basic formula, X = sec(z). A value of z = 0 means the higher field is directly overhead and z = 60 means the field is 30° above the horizon. You can go lower but things quickly fall apart when you do. The errors were calculated on the assumption that the two fields are separated by five degrees in declination, have the same RA, and are on the meridian. For other combinations, the RA and declination must be such that the altitudes of the two fields differ by five degrees. The true value of k' is given in the heading. Columns 3 and 4 give the errors for assumed values of k' = 0 and k' = 0.2 when the actual value is k' = 0.25. Columns 5–7 give the errors for assumed values of k' = 0, 0.2, and 0.25 when the actual value is k' = 0.30.
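Under those assumptions, the table entries can be reproduced (to rounding) from a one-line formula: the error is the extinction mismatch times the air-mass difference between the two fields.

```python
import math

def airmass(z_deg):
    """Plane-parallel air mass, X = sec(z)."""
    return 1.0 / math.cos(math.radians(z_deg))

def assumed_k_error(z_deg, k_true, k_assumed, separation_deg=5.0):
    """Error, in magnitudes, from assuming k_assumed instead of k_true when
    the lower field sits separation_deg below the field at zenith distance
    z_deg (the geometry used for Table 5.1)."""
    dX = airmass(z_deg + separation_deg) - airmass(z_deg)
    return abs(k_true - k_assumed) * dX
```

For example, assumed_k_error(60, 0.25, 0.0) gives about 0.092, matching the last row of the table.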
If the maximum acceptable error is 0.01 m, a rather tight tolerance, you can see that assuming a value of k' = 0.0 quickly leads to that maximum error. When the true k' = 0.25, using k' = 0 results in an error of 0.01 m even when the higher field is 70° above the horizon, i.e., almost overhead. If you use a value of k' that is within 0.05 m of the true value, the 0.01 m error is not reached until the higher field is about 45° above the horizon, or about halfway from horizon to zenith. For a latitude of +35°, this corresponds to a declination of –10°.

As you can see, you’ll need to get first-order extinction values (or use good estimates) if your reference field is not reasonably close to the target field. This is often the case if you want to use Landolt fields, which are confined to near the equator. The good news is that if you’re doing differential photometry, you need the first-order extinction values only to determine the standard magnitudes of the comparison stars and the color index of the stars and target. Once those values are found, the concerns about first-order extinction and zero-points when reducing the target’s raw magnitudes go away completely. If you don’t know the standard CI values beforehand, usually the case when working asteroids, then you need a minimum number of images in two standard filters. You can often get these images early in the run and then start your concentrated time-series or other work as quickly as possible.
5.5 The Same Color Index
Throughout the reduction routines, you’ll need to select a specific color index, e.g., (B–V), (V–R), or (V–I), and then use it throughout the reduction process. For example, if you use the (V–R) color index when finding the transforms, then you must also use (V–R) when finding the first-order extinction, the standard color index and magnitude for each comparison star, and the standard magnitude of the target. You cannot mix and match. This may mean that you have to go through the reduction process from start to finish more than once, the most likely case being if you want to use (B–V) for B observations.

Whichever color index you select, there must be catalog values in the corresponding filters available for the reference field you’ll be using. You can’t use the (V–R) color index if there are no R magnitudes for the reference field.

I use the (V–R) color index for the bulk of my work. This is also a bit off the beaten path, (B–V) being more common. (V–R) is sometimes shunned because there is not much difference between the two pass bands, which is why you see (B–V) and (V–I) used more often. Despite all this, I use (V–R) because R presents fewer problems when imaging with thinned, back-illuminated CCD chips. These chips often show a “fringing” pattern due to a combination of their thinned nature and certain emission lines in the night-sky spectrum. Flat fields don’t always remove the fringes because the fringing is independent of the telescope and only partly dependent on the camera.
5.6 Transforms First
Finding transforms before extinction is another break with tradition. There are a couple of reasons for this approach. The first has already been discussed: when using a differential formula to find standard magnitudes, you can safely ignore first-order extinction and nightly zero-point determinations after finding the standard magnitudes and color indices of the comparison stars you’ll be using. The second reason concerns the approach I recommend for finding first-order extinction. I call it the “modified Hardie” method, which is based on a method developed by Hardie and discussed in Astronomical Techniques (W.A. Hiltner, ed.). I’ll cover the details later, but for now, suffice it to say that you need to be able to get your raw instrumental magnitudes approximately on the standard system for the method to work. You can’t do that unless you know the transforms of your system beforehand. Photometry is often a Catch-22.

The general idea behind finding the transforms for your system is to image a field with accurately known standard magnitudes. The Landolt fields are the first choice for getting on the true standard system. However, being near the equator, they may be too far from your target field, and so finding first-order extinction becomes more important. If you’re satisfied with “very close” and looking to save time, you can use one of the Henden fields listed in the appendices. These fields have been carefully chosen to include stars that have a good range of colors, errors in V of 0.02 m or less, and R magnitudes.

Since telescope time can be precious, it’s helpful to remember that you do not need to find transform values every time you observe. Unless something changes in your system (a different camera, cleaned or changed filters, or re-aluminized optics), the transforms change very slowly over time, if at all.
Usually it’s sufficient to spend time on a couple of nights two to four times a year to get new transforms. Do not try to find transforms under less than ideal conditions. Smoke or haze might artificially redden stars. Thin clouds or haze may result in non-uniform extinction across your field. When in doubt, wait until a better time or night.
5.7 Finding Transforms
The steps for finding the transforms in a given filter are relatively easy. If you’re going to find the first-order extinction terms using the modified Hardie method as well, you can use the images of the reference field for both sets of reductions. Never waste time at the telescope needlessly.
5.7.1 Image a Standard Field
Select a Landolt or Henden field that is well above the horizon. If possible, image this field early in the run so that you can then concentrate on the target field.
Shoot the field at least two to three times in each filter that you plan to use for studying the target field. This allows you to average several readings and so reduce the noise in the solution. Remember that errors, in magnitudes, are about 1.0857/SNR; e.g., an SNR of 50 gives about a 0.02 m error. Use an exposure that is long enough to get an SNR of 75–100 for the faintest stars that you’ll measure, but not so long that brighter stars are saturated or outside the linear portion of the camera’s response.

Remember that regardless of which filter(s) you plan to use to study the target field, you must get images through the filters corresponding to the color index that you’ll be using in the reduction steps. For example, say you’re going to use the C (clear) filter for most observations and will convert them to the standard V system. Say also that you’re going to use the (V–R) color index for the reduction steps. In this case, you need to get images of the reference field in C, V, and R. Finally, don’t forget that there must be catalog values for the colors that you’re going to use for the color index. You cannot do transforms using the (V–R) color index if there are no V and/or R magnitudes available.
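The 1.0857/SNR rule of thumb comes from the exact expression 2.5·log10(1 + 1/SNR), which for large SNR reduces to (2.5/ln 10)/SNR ≈ 1.0857/SNR:

```python
import math

def mag_error(snr):
    """Photometric error, in magnitudes, for a given signal-to-noise ratio."""
    return 2.5 * math.log10(1.0 + 1.0 / snr)
```

At SNR = 50 this gives about 0.021 m, the ~0.02 m figure quoted above.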
5.7.2 Measure the Images
Using your photometry software, measure the raw instrumental magnitudes for each star in the field for which there is a catalog entry. Avoid overexposed stars as well as those with too low an SNR in the “faintest” filter. For example, if you’re using (B–V) and a star has an SNR of about 150 in V but only 25 in B, avoid that star. This problem can be sidestepped by adjusting the exposure times for each filter so that you get a good SNR for the stars in each filter. However, do not use different exposure times unless you are certain that your photometry software normalizes measurements to a common exposure time when finding the instrumental magnitude. Most of the popular photometry programs do normalize exposures. If you’re not certain, check the documentation or ask the vendor.
5.7.3 Plot the Data and Find the Transforms
Find the transforms for each filter, using the general formula
    M − m = Tf * (CI) + [ZP − k'X]    (5.5)

where

    M      catalog standard magnitude for the filter
    m      instrumental magnitude
    Tf     transform value for the given filter
    CI     color index, i.e., (B–V), (V–R), or (V–I)
    ZP     zero-point offset for that filter
Note that the instrumental magnitude can be either the raw instrumental magnitude for the star or the magnitude adjusted for extinction. If you used an assumed value for extinction, recall that the resulting zero-point (ZP) is good only for the same air mass, X, at which you took the reference field images. When using the raw magnitudes, the assumed value of k' is 0.

Plot the data for each star using the catalog (standard) color index value for the X-axis and the difference between the catalog (standard) and instrumental magnitude for the Y-axis. Fig. 5.1 shows such a plot created by MPO PhotoRed, a photometric reductions program. The appendices give an example of finding transforms using a spreadsheet. For the B, V, R, and I filters, the slope of the line is generally negative and relatively near 0 (the TV value is –0.132 in Fig. 5.1). Depending on your system, the C filter slope may be negative or positive, but it should also be near 0. Make sure the program or spreadsheet computes the error in the value or otherwise shows the accuracy and/or precision of the result. In the example, the standard deviation of the linear regression is about 0.02 m.

It’s worth repeating that you should select a field that allows you to measure stars with a large range of color index values. In Fig. 5.1, the (V–R) range is about 0.3–0.95. You can see why this is important if you limit the plot to values of 0.3–0.55. In that case, you’d have a hard time finding a confident solution because of the scatter of the data. The extra data points toward the right (redder stars) made all the difference. The same data was reduced using the (B–V) color index. The values were different, as might be expected, but the results still followed the general trend of negative, nearly zero slopes, save for C, which had a positive slope but was still near 0.
Figure 5.1 The V transforms as plotted and found by MPO PhotoRed. The data should always be plotted so that outliers can be easily spotted. Do not rely on a numerical result alone.
5.8 The Hidden Transforms
The steps above found the relationship for a given filter using the measured magnitude, the catalog magnitude, and a known color index. This is fine if your target happens to lie in a field where the comparisons have well-known values, which is often the case for variable star observers working program stars. It is rarely the case for those who work asteroids or a new variable, both likely to be in uncalibrated fields. However, once you can get the standard color index and magnitude for each comparison, you can then use the transforms found via the procedure above for the final reductions of the target. Even better, you can use the same data that you created for finding the transforms to get around the problem of working with unknown comparisons. Mathematically, you want to equate a given instrumental color index to a known color index.

CI = (ci – k'X) + ZP    (5.6)

where
CI = standard color index for given star
ci = instrumental color index
ZP = zero-point offset
Fig. 5.2 plots data from the previous section to show how the instrumental magnitudes are related to the catalog values. The linear regression solution and correlation are included as well. In a perfect system, the slope of the line would be exactly 1.000. Be sure that you use the proper values on each axis. The goal is to be able to find a standard color index, CI, by knowing an instrumental color index. To do this, you must use the instrumental values for the X-axis.
Figure 5.2 A plot of (V–R) versus v–r showing how the instrumental values relate to the catalog values in a reference field. While there is a small color dependency in this system, it can be well defined with a simple linear function.
Again, you can see the need for measuring stars with a good range of colors. While the data covering 0.3 < (v–r) < 0.60 shows a tight correlation, the additional data points using redder stars give you more confidence that the solution is linear and strongly correlated. Based on this solution, to find the true (V–R) of a star you’d use the formula

(V–R) = 0.978(v–r) + 0.017

Be sure to compute and save these results if you’re doing reductions with a spreadsheet or a program that does not use the approach being outlined in this chapter. You will need the slope and intercept values when you compute the color index and standard magnitudes of the comparisons. MPO PhotoRed automatically computes, saves, and uses the results to find the color index as part of this reduction process as well as generating a plot of the solution. The plot above came from the spreadsheet example you’ll find in the appendices.
5.9 First-Order Extinction
To maintain the highest possible accuracy in your results for the standard colors of the comparison stars and the color index of the target, you should determine first-order extinction terms and the associated nightly zero-points. You should get them every night and apply them only to the data from that night. Remember that a change in the value of k' also means a change in the nightly zero-point (ZP). That’s why you must find both or neither. Getting the necessary images for the recommended modified Hardie method involves only a few extra minutes’ time and, if you’re also finding transforms, means you can use the data from that process as part of the solution for the first-order extinction determination. The next two sections outline the common methods for finding first-order extinction. The first is the modified Hardie approach and is the one I recommend because of its efficiency and usefulness. The second method has been around for some time and involves using the instrumental magnitudes of a comparison star within the target field when taken over a large range of air mass values. If you live in an area with highly stable conditions and know that the sky conditions remained constant throughout your run, this method is perfectly acceptable. The important thing to remember when using first-order extinction values is that the sky conditions cannot change from the time you took the images for finding the values to the time of the image where you apply the extinction terms. If you are attempting absolute photometry, you must monitor conditions closely; otherwise, you will run into complications. This is why differential photometry is the much-preferred approach. The relative differences between the comparisons and target can be determined with much higher precision and accuracy than the absolute values of comparisons and targets, especially under less than perfect conditions.
On those nights when conditions allow absolute photometry, even if for only a couple of hours, you can return to the target field or fields to determine the comparison star values to a higher degree of accuracy. Then it’s a simple matter to apply the previously found differential values to get better-quality standard values for the target(s). Of course, if conditions are good enough at some point during your run each night to allow finding acceptable absolute values for the comparisons, then do so and save yourself the effort of having to go back on a different night.

5.9.1 Modified Hardie
Hardie outlined his technique for finding nightly extinction values in his chapter in Astronomical Techniques (W.A. Hiltner, ed.). The book is out of print, but you may be able to find it in some used/rare bookstores. It’s pretty outdated overall, but there’s always something to be learned from the masters. Hardie realized that time at the telescope was valuable. One could spend a good part of the night taking images just to get the extinction values needed to transform instrumental magnitudes into a standard band. So he developed a plan that required a minimal amount of time. With practice and some advance planning, you can get the images you need in only 15–20 minutes. The original Hardie routine relies on the fact that if two stars of different magnitudes in the same color band are observed at different air masses, then – were it not for extinction – the difference between the instrumental magnitudes would be identical to the difference in the standard magnitudes. For example, assume that Star1 is in a field with an air mass of 1.0 and Star2 is a star in a field with air mass 2.0 and that the catalog values are Star1 = 14.000 and Star2 = 13.000. Then without extinction, the difference between the instrumental magnitudes should be exactly 1.000. In reality, the difference is a bit larger because Star2 is fainter than it should be, being seen through a longer path through the atmosphere than Star1. The above method works only if the instrumental magnitudes are on the standard system. This is not generally the case, even if you take images through filters. The C filter is particularly problematic because of its much broader band pass. Now you know why you find the transforms first. Even if only approximately correct, when applied to the raw instrumental magnitudes, the values used in the solution are much closer to the standard system.
Without applying the transforms, the results from the Hardie method can be wildly scattered and not very close to reality. The original Hardie approach outlined in the first edition of this book suffered badly because of this very issue.
5.10 Finding First-Order Extinction (Modified Hardie)
As with the steps for finding the transforms, the steps for finding first-order extinction via the modified Hardie method are straightforward and won’t take too much time from your observing run. I generally shoot the necessary images at the start of the night or as soon as conditions allow. If I can’t get them immediately, then I start on the target field. If at all possible, never let photons hit the ground.

5.10.1 Image a Reference Field Well Above the Horizon
If you’re also finding transforms, you can use the images from the field that you shot for that purpose in this method. If you’re not finding transforms, then locate a Landolt or Henden field near the meridian or at a high altitude. Northern observers can often find a Henden field that is almost at the zenith. The main consideration is that this field and the second one used in the process have a large difference in air mass values, i.e., at least 0.5. Shoot two to three images of the field in each filter in which you’ll be working the target field. This allows you to average readings, reducing noise in the data. If those filters do not include those for the color index you’re using for the reductions process, then you need to get images in those colors as well. For example, if you’re working the target field in Clear and using the (V–R) color index for the reductions, then you need to take images in C, V, and R. Use exposures for each filter that allow you to measure a good range of stars in both magnitude and color with sufficient SNR. Stars that are too faint or saturated make for bad results.

5.10.2 Image a Reference Field Near the Horizon
This field should be around 30° high, i.e., with an air mass near 2.0. Don’t go below this limit as things become unpredictable at high air masses and, at the extreme, you can get different results using sets of similar stars, one near the top of the frame and one near the bottom. Follow the same guidelines as in step 5.10.1.

5.10.3 Measure the Images
Measure each field separately but in a way so that you can eventually combine the data. If you used different exposure times for different filters, make sure your photometry software normalizes the measurement before reporting an instrumental magnitude.

5.10.4 Reduce the Data
To reduce the data, you must find the average instrumental magnitude for each star in each field and then apply the transform. In general terms
Mr = Mf – Mi – Tf * (CI)    (5.7)

where
Mr = reduced magnitude of star
Mf = catalog standard magnitude of star
Mi = measured instrumental magnitude
Tf = transform for filter used
CI = standard color index of star, e.g., (V–R)
To repeat: if, for example, you used (V–R) to find the transforms, then you must use (V–R) values for CI. To understand the modified Hardie method in practice, take a moment to review the above formula. Assuming a perfect world and imaging system and that there is no such thing as extinction, the difference (Mf – Mi) is a constant for all stars of exactly the same color. A star that is exactly one magnitude brighter in the catalog should result in an instrumental magnitude that is exactly one magnitude brighter with your system. The term Tf·(CI) removes most if not all of the color dependency, effectively making every star identical in color. Therefore, the left-hand side of the equation will have the exact same value for any star that you measure. The reason that the Hardie method (modified or not) works is that the value of Mi is not the same for two identical stars at different air masses. The star at the higher air mass (nearer the horizon) will have a slightly fainter instrumental magnitude due to extinction. Therefore, the (Mf – Mi) difference is not the same for the two fields and, as a result, neither is the constant value of the left-hand side of the equation.

5.10.5 Plot the Data and Determine the Extinction
For each star, determine the reduced value using the equation above. Then plot the data, using the average air mass of the field for the X-axis and the reduced magnitude for the Y-axis. The plot provides a visual confirmation of the value found using the formula
k' = (Avg1 – Avg2) / (X1 – X2)    (5.8)

where
Avg1 = average of reduced values for lower field
X1 = average air mass of lower field
Avg2 = average of reduced values for higher field
X2 = average air mass of higher field
Fig. 5.3 shows the results of using this method with two Henden fields. The scatter in the data is partly due to using non-standard stars (the errors are up to 0.02m in V) and a less than perfect match to the standard system.
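The reduction can be sketched in a few lines. The transform, star measurements, and air masses below are all hypothetical, and because raw instrumental magnitudes carry an arbitrary zero-point that affects the sign conventions, the extinction here is taken as the magnitude of the slope between the two field averages:

```python
from statistics import mean

Tv = -0.13  # hypothetical V transform from the transforms step

def reduced(Mf, Mi, CI):
    """Eq. 5.7: Mr = Mf - Mi - Tf*(CI)."""
    return Mf - Mi - Tv * CI

# Hypothetical (catalog V, instrumental v, (V-R)) for stars in each field.
high_field = [(12.10, -5.88, 0.42), (12.85, -5.12, 0.61), (13.40, -4.56, 0.88)]
low_field  = [(11.95, -5.84, 0.38), (12.60, -5.16, 0.55), (13.15, -4.59, 0.74)]
X_high, X_low = 1.08, 1.95  # average air mass of each field

avg_high = mean(reduced(*star) for star in high_field)
avg_low  = mean(reduced(*star) for star in low_field)

# Eq. 5.8: slope of the line joining the two field averages.
k_slope = (avg_low - avg_high) / (X_low - X_high)
print(f"slope = {k_slope:+.3f} mag/airmass; k' = {abs(k_slope):.3f}")
```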
Figure 5.3 The results of the modified Hardie method from MPO PhotoRed. The first-order extinction is the slope of the line joining the average value of the two sets of data. The plotted magnitudes are negative numbers (bright objects have more negative values), so the value of k' is always positive.
5.10.6 Retracing Steps
If you used assumed values for finding the transforms, you could go back at this point, armed with actual values for first-order extinction, and redo the transform calculations. The changes should be very minor. Of course, any change there might result in a small change in extinction should you rerun the modified Hardie reductions. Those changes are likely to be so small that it is not worth the effort.
5.11 A Variation on the Modified Hardie Method
A similar approach to the method described above uses different stars and includes more data points between the two extremes of air mass. Richard Miles developed this method and presented it at the Society for Astronomical Science Telescope Symposium in 2005. You can find his paper in the proceedings, which are available on-line at the SAS web site, http://www.socastrosci.org. Richard uses stars from the Hipparcos catalog, which have very accurate V magnitudes. You’ll find a list of blue–red star pairs from the catalog in the appendices. These can be used in lieu of the Henden fields. Given the large number of pairs, it should be easy to find five or six pairs that you can quickly image, making sure to cover a wide range of air mass values. Use the same approach for reduction as described above, with the difference that you use the actual air mass value for each star (or average of the pair) for the X-axis. This way, you should have five or six data points that lie on a line whose slope is the extinction for the given filter. These same stars can be used for finding second-order extinction, sometimes necessary for B and C filter observations.
5.12 Finding First-Order Extinction (Comp Star)
When you shoot the same field all night, you have a ready set of data for determining the nightly extinction values. As the field rises and then sets, the air mass is constantly changing. It’s largest when nearer the horizon and smallest when the field is at its maximum altitude above the horizon.

5.12.1 Image the Target Field
Take images in the filters you’ll be using to study the target and for which you want to determine extinction. Select one or more stars that are similar in color to the target that you’ll measure in each image. You want to avoid using comparisons that are too bright or too faint in order to avoid linearity or excessive noise issues.

5.12.2 Measure the Images
Measure the raw instrumental magnitude of the star or stars in each image. Be sure to record the air mass for each image as well. The imaging and/or measuring software should do this for you automatically.

5.12.3 Plot the Data and Find the Extinction Values
Plot the data using the air mass for the X-axis and the raw instrumental magnitudes for the Y-axis. The first-order extinction is the slope of a linear regression of the data points. If you’re observing through more than one filter, you must perform this method for images in each filter. Fig. 5.4 shows a typical example of a comparison star extinction plot. Actually, Fig. 5.4 does not show a very good solution. As I’ve said several times, never trust a computer. Relying on the results of a linear regression alone without a data plot can lead to disaster. The slope of the regression line is 0.744. That may be valid for the given data, but that’s an excessively large value for V extinction. Of course, there may have been a volcanic eruption, deep haze, or some other factor that leads to the result. However, the plot gives a more likely answer. Look at the range of values on both axes. You can’t control the range of magnitudes; they are a function of the air mass and star. You can control the range of air mass values. In this case, the combination of a small range of magnitudes and a small range of air mass values gave an artificially high extinction value. A linear regression solution is very sensitive to the range of data on both axes. Fig. 5.5 shows a better use of comparison star data, though the data is still rather sparse at higher air mass values. This is natural. When following a target most of the night, the majority of observations are going to be at lower air mass values. However, it’s important to get at least some data points at higher values. Otherwise, you could end up with unrealistic results.
Figure 5.4 Finding first-order extinction. The plot shows instrumental magnitudes of a selected comparison star over various air masses. In this case, the range of air mass values was too small to get a good estimate of the extinction.
Figure 5.5 A screen shot from MPO PhotoRed showing the data from following a comparison star throughout the night to obtain the first-order extinction value.
5.13 Comparison and Target Standard Color Index Values
The traditional approach to reductions calls for finding the color index for the comparisons and target on a standard system for (B–V), (V–R), and (V–I) using instrumental magnitudes in those same colors. The results are then used for the final reduction to standard V magnitude. In the strictest sense, this means having images in the appropriate filter for every observation to be reduced to V magnitudes. You could use an averaged value, but the differential formula that will be used for final reduction to a single color depends on the standard and not instrumental color index. So, the next step is to find the standard color indices for the comparisons and targets.
5.14 Find the Color Indices of the Comparisons and Target
Simplicity and efficient use of telescope time being one of the tenets of this overall approach, you’ll find that once again you can get the necessary images for this and the next step of finding the standard magnitudes of the comparisons with only a few images. When you take the images depends on how you handled first-order extinction and nightly zero-point values.

5.14.1 Image the Target Field
Take two or three images in the filters in which you’ll be doing the majority of your work. If these do not include the filters for the color index you used for finding the transforms and first-order extinction, then take images in those filters as well. For example, if the bulk of your work is in C and you used the (V–R) index for transforms, you need images in V and R as well as C. As with the previous steps, use exposures that allow you to get a good SNR for the target and comparison stars. This is where things can get a little tricky. If you’re working a faint target, you will have to choose comparisons carefully so that they will not be saturated or reach the non-linear level of the CCD. On the other hand, you don’t want to use comparisons that are too faint, so that you can minimize the noise in the data. Using comparisons with higher SNR values reduces random errors and so provides a better baseline for the differential values. If you assumed the first-order extinction values and nightly zero-points when doing the transforms, then you need to shoot the target field when it is at about the same air mass as when you shot the images for the transforms. When taking this approach, I usually shoot a reference field that’s close to the target field at the beginning of the run and immediately move to the target field. This often means that the two fields are not very high, just at or above the altitude limit of 30°, but at least the air mass values are similar. If you are still planning to use assumed extinction values, then a better approach would be to wait until the two fields are higher. The problem with this, when doing automated observing, is the assumption that when the script went to take the images, the sky conditions would be acceptable. You could put periodic breaks in the script to get the reduction images, but that takes time away from the target.
Naturally, it would be during one of those breaks away from the target field that the star or asteroid would do something interesting. Ultimately, the best solution is to get a second reference field that is near the zenith (or at least the meridian) and determine the actual first-order extinction and nightly zero-points. This takes little additional time at the telescope and should be done at about the same time as when the lower reference field was shot. Furthermore, if your software or spreadsheet is set up to handle the modified Hardie method, the extra time required at the computer is relatively short. To paraphrase Shakespeare, “Sometimes shortcuts make for not so good photometry.”
5.14.2 Measure the Images
Use your photometry software to measure the raw instrumental magnitudes in all colors of the target field for the comparisons and target. These must be exactly the same comparisons that you used or will use for the differential photometry. This means not only the same set of stars but also the same order of assignment. For example, Comparison1 when doing the reductions must be the same star as Comparison1 when doing the differential photometry. You also need to record the average air mass for each set of values for a given filter. Finally, you need the “hidden transforms” discussed on page 56. All of this is handled automatically by MPO Canopus and MPO PhotoRed.

5.14.3 Plot the Data and Compute the Comparison/Target Color Indices
The general formula for computing the standard color index for a comparison is

CI = (((m1 – k'1X) – (m2 – k'2X)) * TCI) + ZPCI    (5.9)

where
CI = standard color index for (B–V), (V–R), or (V–I)
m1 = instrumental magnitude of first color of index
k'1 = first-order extinction for first color
X = averaged air mass of target field
m2 = instrumental magnitude of second color of index
k'2 = first-order extinction for second color
TCI = hidden transform slope
ZPCI = hidden transform intercept
For the (B–V) index, m1 is the B magnitude and m2 is the V magnitude. For (V–R) and (V–I), m1 is the V magnitude and m2 is either the R or I magnitude. During the calculations, do not average the values in each filter. Instead, get three separate values for each star. The reason is so that you can then find the mean and standard deviation of the derived color index. If you averaged the three observations to create a single observation, then you could not compute the standard deviation unless you computed the error for each observation and then got into the propagation of errors. If you don’t know what that means, then don’t average the values. If you do, then work the process as you like. The MPO PhotoRed screen shot in Fig. 5.6 shows the results of measuring three images in V and R for a set of five comparisons and the target. The average and standard deviation values for the comparisons and target, shown in the title section above the plot, were found by taking the average of the three derived values for each object and finding the standard deviation of the mean. You’ll find an example using a spreadsheet in the appendices.
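A sketch of Eq. (5.9) applied to one comparison star follows. Every numerical value here — the extinctions, the hidden-transform slope and intercept, the air mass, and the instrumental magnitudes — is a hypothetical stand-in for values found in the earlier steps:

```python
from statistics import mean, stdev

k1_v, k1_r = 0.20, 0.12      # hypothetical first-order extinction in V and R
T_ci, ZP_ci = 0.978, 0.017   # hypothetical hidden transform slope/intercept
X = 1.35                     # averaged air mass of the target field

def color_index(v, r):
    """Eq. 5.9 for (V-R): CI = (((v - k'v*X) - (r - k'r*X)) * T_CI) + ZP_CI."""
    return (((v - k1_v * X) - (r - k1_r * X)) * T_ci) + ZP_ci

# Three unaveraged (v, r) instrumental pairs for one comparison star,
# one pair per image, per the advice above not to average first.
pairs = [(-5.480, -5.925), (-5.472, -5.931), (-5.491, -5.914)]
cis = [color_index(v, r) for v, r in pairs]

# Averaging the three derived indices gives the final value and its scatter.
print(f"(V-R) = {mean(cis):.3f} +/- {stdev(cis):.3f}")
```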
Figure 5.6 The color indices for a set of comparisons and target found by MPO PhotoRed.
You can try to confirm the absolute accuracy of the values you find by consulting one of the large on-line stellar databases such as the Sloan Digital Sky Survey or 2MASS. Unfortunately, those surveys do not use the Johnson–Cousins filters, so you’ll have to convert their values to the J-C system. The derived results should agree with what you found to within a few hundredths of a magnitude. Be sure to save these results and a guide so that you remember which star you used for Comparison 1, Comparison 2, etc., e.g., a screen shot of one of the target field images. Consider measuring the target field images taken for time-series work first so that you can select the comparison stars in advance and confirm that none of them is variable before you go about reducing the images taken for this part of the process.
5.15 The Comparison Star Standard Magnitudes
It’s a quick jump from finding the standard color index values of the comparisons to finding their standard magnitudes. The reason this step is separated from finding the color index values is to allow using previously established color index and standard magnitude values for the comparison stars and, if available, the color index of the target. If you’re working a field where the comparison star color index and standard magnitudes are known beforehand, you can skip the steps related to finding those values. If the values become available after you’ve started your campaign, then you can use those new values instead. Of course, you should then go back and recompute the target standard magnitudes based on the new values.
5.16 Finding the Comparison Star Standard Magnitudes
In this step, you use the same initial data you created when finding the color index of the comparison stars. The difference is how you work with it.

5.16.1 Plot the Data and Determine the Comparison Standard Magnitudes
The general formula for converting instrumental to standard magnitudes is
M = m – X(k' + k″CI) + (Tm * CI) + ZP    (5.10)

where
M = converted standard magnitude
m = raw instrumental magnitude
k' = first-order extinction for single color
k″ = second-order extinction for single color
X = air mass
Tm = transform for the single color (not color index)
CI = standard color index of the object
ZP = zero-point offset
For each comparison star in each filter, compute the value of M using Eq. (5.10). If you took three images, you’ll get three values of M for a given comparison/filter combination. Now find the average and standard deviation of the data points for the final value of the standard magnitude of the comparison in the given filter.
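Eq. (5.10) can be sketched the same way; again, all numbers below (extinctions, transform, zero-point, color index, and raw magnitudes) are hypothetical stand-ins for values from the earlier steps:

```python
from statistics import mean, stdev

k1, k2 = 0.20, 0.0     # hypothetical first- and second-order extinction in V
Tv, ZP = -0.13, 18.10  # hypothetical V transform and nightly zero-point
X = 1.35               # air mass of the images
CI = 0.45              # standard (V-R) of the star from the prior step

def standard_mag(m):
    """Eq. 5.10: M = m - X(k' + k''*CI) + (Tm * CI) + ZP."""
    return m - X * (k1 + k2 * CI) + (Tv * CI) + ZP

# Raw instrumental V magnitudes of one comparison from three images.
raw = [-5.480, -5.472, -5.491]
mags = [standard_mag(m) for m in raw]

# The final standard magnitude is the average; the scatter is its error.
print(f"V = {mean(mags):.3f} +/- {stdev(mags):.3f}")
```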
Figure 5.7 Finding the standard magnitudes of a set of comparison stars using MPO PhotoRed.
The screen shot in Fig. 5.7 is from MPO PhotoRed. Three images were taken in each filter and so three sets of data points are seen. The title contains the derived standard magnitude and standard deviation for the five comparisons. Note that the standard deviations are less than 0.01m. The appendices contain an example using a spreadsheet. Remember that the first-order extinction term and nightly zero-point are intimately related. A change in the extinction causes a change in the zero-point value. If you used 0 or another assumed value for first-order extinction in previous steps, then you must use the zero-point value that was found or assumed in those previous steps. If you determine the first-order extinction for each night, then you must also compute and use the zero-point value that is valid for the new first-order extinction value. Actually, the comparisons and target in the steps so far were stars in M67 and have well-known color index and magnitude values. I did this so that I could compare my results against the catalog values. The derived values for the comparisons were generally within ±0.02m of the catalog values. For your first attempts at running through this process, I recommend using a standard field – other than the one you used to find the transforms – for the target field. This way you can compare your results to the catalog values and, if your results don’t agree, be able to find out why. Once you can regularly reproduce catalog values to within 0.02m or better, you can move to target fields with unknown comparisons with greater confidence.
5.17 Target Standard Magnitudes
You’ve arrived at the step where you can convert your time-series observations to standard magnitudes. Before you move on, review the discussion on the differential formula that is used for this step (see page 49) so that you are comfortable with how and why you’ll be handling the data. The care that you’ve exercised in the previous steps will determine the quality of the results. The reduction process is like any other where the final result depends on the results from previous steps. A mistake anywhere along the line affects the end result. If all the previous steps were done correctly, then you should have confidence in the results that follow. However, do not assume anything. Never trust a computer, especially when doing reductions at 3 A.M.
5.18 Finding the Standard Magnitudes of the Target
It may seem a little late to talk about getting the target field images but those used so far were of the reference field and limited in number and purpose. This being a book on lightcurves, the premise is that you have or will have a number of images taken during a night (a time-series) that you want to reduce to standard magnitudes and then analyze at least for the period and amplitude. It’s time to talk about those images.
5.18.1 Image the Target Field With the Filters of Interest
The filters of interest may include those that you used to find color index and standard magnitudes, or they may not. To keep things simple for the following discussion, the assumption will be that your time-series images were taken with the Clear filter with the intent of converting to standard V magnitudes. What follows can be easily applied to any number and combination of filters. You would handle each one as if it were the only one and not worry about the others. I’ll also assume that you are observing an asteroid and that you start shooting in the eastern half of the sky as soon as darkness sets in and continue until the field is either 30° high in the west or twilight begins. In other words, you’ll have images taken over several hours. I’ll also assume that you got the reference and target field images for transforms, color index, and standard magnitudes at the beginning of the run. It may seem obvious to say, but once you can start imaging, do so. Use exposures that provide a sufficient SNR for the target. Don’t worry too much about the comparisons, save that you do want at least two that are not saturated or out of the linear region of the chip. As long as you obey those guidelines, it’s OK if the comparisons are a magnitude or more brighter than the target. If you’re working an 18th magnitude object with an SNR of only 10 to 20 (I’ve done it with very usable results), there’s no reason to use comparisons that faint. If you do, the noise in the base value of the comparisons will be considerable, even if you use four or five comparisons. Using brighter stars reduces that noise. Make use of any possible advantage to get better results. Because you’ll be finding and allowing for the color differences between each comparison and the target, you don’t have to worry too much about matching colors of comparisons and target. However, don’t take that license too far.
Working with comparisons that have a large color difference from the target is pushing your luck too much. Sometimes you have no choice. Many long period variables are very red and present unique problems. For one, most red stars are variable to one degree or another, making them bad candidates for comparison stars.

5.18.2 Measure the Images
For each time-series in a given filter, use your photometry software to find at least the raw instrumental magnitude and the air mass for each observation. If your software doesn’t record the air mass, record the date and time along with the right ascension and declination of the field so that the air mass can be computed. The images should be corrected with flat fields and darks before you measure them. I’ll say it here and again later: be careful when measuring your images. If you’re working with an asteroid, make sure that it doesn’t pass close to a faint field star that causes the combined magnitude to rise unexpectedly. In addition to the other rules for comparisons already outlined, make sure they don’t have close companions as well. If they do and there are no alternate stars available, make sure
the measuring aperture is large enough to include the companion as well. Otherwise it may drift in and out of the aperture and cause the comparison star value to fluctuate dramatically. Again, be sure to plot the raw data for each comparison to check for variability. Try plotting the difference between any given comparison and the average of the others versus time as well. You should get a plot that is relatively flat. If not, then one or more of the comparisons is a variable.

5.18.3
Find the Standard Magnitudes
I’ll refer you again to the discussion about the differential formula on page 49 for processing each observation. The beauty of this method is that the first-order extinction and nightly zero-points drop out of the picture. Furthermore, the differential formula accounts for the color difference between each comparison and the target. The best part is that you need an observation in one and only one filter to derive the standard magnitude regardless of which filter was used. For each observation, use the differential formula and the necessary values to derive a standard magnitude for the target at the given time. The appendices show an example using a spreadsheet and data of a variable star. MPO Canopus and MPO PhotoRed work in combination to make this final step very easy. Fig. 5.8 shows data after it was imported from Canopus into PhotoRed and reduced to standard magnitudes. In and of itself, it’s a nice plot, but how accurate is it? Better yet, can the results be duplicated on subsequent nights as determined by the maximums and/or minimums having the same absolute magnitude (assuming they actually are the same each night)?
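As a concrete illustration, the reduction of a single observation with the differential formula can be sketched in a few lines of Python. This is only a sketch: the function name and the sample numbers are mine, and the form shown is the transform-plus-differential form (compare Eq. (6.4) with the second-order term dropped), not a substitute for the full discussion on page 49.

```python
def standard_mag(v_tgt, v_cmp, V_cmp, ci_tgt, ci_cmp, Tv):
    """Reduce one observation to a standard magnitude via the differential
    formula: V_tgt = V_cmp + (v_tgt - v_cmp) + Tv*(CI_tgt - CI_cmp).
    Lowercase v values are instrumental magnitudes."""
    return V_cmp + (v_tgt - v_cmp) + Tv * (ci_tgt - ci_cmp)

# Hypothetical numbers: instrumental mags of target and comparison, the
# comparison's standard V, standard (V-R) color indices, and a V
# transform slope of -0.15.
V = standard_mag(-5.250, -6.100, 12.400, 0.45, 0.62, -0.15)
```

With these numbers the target comes out at V of about 13.28; note that the first-order extinction and nightly zero-point never appear because they cancel in the difference.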
Figure 5.8 Data from a variable star discovered by the author. The period for the W UMa star is about 11.8 h. Several lightcurves were obtained and showed that the system might have total eclipses and large star spot activity.
Figure 5.9 Plots from MPO PhotoRed of reduced data taken on the two nights following that shown in Fig. 5.8. The maximums and minimums in these two nights agree to within 0.02m of the first.
The raw data plots in Fig. 5.9 give the answer — yes! The same transform values found on the first night, as well as the assumed first-order and nightly zero-points, were used for all three nights. It would have been better still to find the values each night.
5.19
The Different Path’s End
You have finished the journey down “the path less taken” of photometric reductions. There will be times when the traditional approach may prove better. So far, I haven’t run into very many situations where that’s the case. I do recommend that you do the additional reading that’s been suggested. The more you understand the ideas and practices behind photometry, the better you’ll be able to judge which approach is best under given circumstances. You’ll also understand the sources of errors better and where you can and cannot afford to make assumptions or try to save time. There’s a corollary to Murphy’s law that says, “There is never time to do anything right. There is always time to do it over.” I’ve discovered the truth of that all too often. In photometry, make time to do it right the first time around.
5.20
A Minimalist Approach
The method described above puts your raw instrumental magnitudes onto the Johnson–Cousins standard. There is an even more straightforward approach, one that many observers use to get close to a standard system. The errors can be a little larger and adjustments to put all the observations against a common zero-point may reach 0.05 m. However, in many cases, getting even that close initially is good enough to make the final tweaks to data from different nights (or observers) to make a lightcurve come together. What follows is a variation of what appeared in the first edition of the Practical Guide and follows an approach presented by Dr. Richard Binzel in a paper for
the Minor Planet Bulletin. A search on the Astrophysics Data System, http://adswww.harvard.edu/, will find his complete paper. My thanks to Dr. Binzel for allowing me to paraphrase his paper for this book. In the following steps, I'll assume that you are using a Clear filter for your time-series work with the intent of converting to the Johnson V band. If you can't get a sufficient SNR using a V filter for the calibration images of the target field but can with R, then put your observations on that standard band. Someone with a larger scope can determine the (V–R) value of the target and then use your observations to find the V values. This same general approach can be used for any standard filter, assuming that the observations are converted to the same standard color, e.g., converting B filter data to B standard magnitudes.
5.21
Using the Minimalist Approach for Standard Magnitudes
Before going through the necessary steps, there are some conditions that you must meet during this process. It's better to cover them in general than hit them piecemeal while describing the individual steps.
First, you need to pick a reference field that, if at all possible, is within 2–5° of the target field. This can be a Landolt or Henden field. You can also use one of the red stars from the blue–red pairs from the Hipparcos catalog that are listed in the appendices. The V magnitudes are very close to the Johnson V system, at least within a few hundredths of a magnitude, and certainly very usable for this purpose. Whichever catalog you use, you must select stars that are "solar colored," since that is the approximate color of an asteroid. A good guide is that the star(s) have a catalog (V–R) magnitude of 0.3–0.7.
Next, you must use a comparison star (or stars) in the target field that is also solar colored. Find the v–c (instrumental magnitude) values of the comparisons and asteroid and use comparisons with a v–c value similar to that of the asteroid.
You must also image the reference and asteroid fields at similar air masses. If you select a reference field within a couple of degrees of the target field, this will be just about anytime during the run. However, I recommend waiting until the fields are at least 45° high or at the meridian, whichever gives the lowest air mass value, in order to minimize extinction effects. If the fields are somewhat removed, then you need to determine a time when the reference and target fields have the same air mass. Preferably, this is within a few minutes' time, so that you can ensure that sky conditions did not change while moving from one field to the next.
Now that you know the "musts," you can start the observing run and reduction procedure. In the latter, compute all values to 0.001 m and use the catalog values to this precision.
It’s not likely you’ll actually achieve this level of precision, but you’ll avoid “blocky” data due to round-off errors by using an extra decimal of precision in the calculations. Of course, the final results should use a precision appropriate to the quality of the data and the precision of the reduction values.
5.21.1
Image the Target Field in Filter(s) of Interest
As always, choose an exposure that allows you to have a good SNR for the target and comparisons without going into saturation or non-linear response. If you're using a reference field that is very close to the target field, you can jump to step 5.21.2 at any time. If the reference field is somewhat removed, then you must choose a time to shoot the reference field when it has the same air mass as when you take the standardization images for the target field in step 5.21.2.

5.21.2
Take the Standardization Images of the Reference and Target Fields
At some point during the run, move to the reference field and take three images of sufficient exposure for good SNR values of the star(s) that you'll use to standardize the observations. Quickly move back to the target field and immediately take a bracketing series of exposures. If you are using a C filter for your time-series work, the sequence would be CVCVCVC. It's important that you alternate exposures and minimize the delay between exposures. Shorten the exposures, if possible, but keep a good SNR. If you're using a standard filter for your observations, then you should shoot a quick series in that filter, e.g., VVV, again keeping the SNR usable. The reason for the quick execution of the series is so that you have a "snapshot" of the target, vital if it's variable, at a fixed point in time. You don't want the variable to change much in brightness during the set.

5.21.3
Complete the Imaging Run
Finish the observing run, i.e., continue until the asteroid is too low, twilight begins, or other circumstances prevent you from going on.

5.21.4
Measure the Differential Magnitudes for All Observations
Measure your target field observations just as you would normally in order to find the instrumental raw magnitudes for the comparisons and target. Then find the differential values, i.e., find the value of the comparison (or average of the several comparisons) and subtract that from the magnitude of the target, i.e.,
∆m = mo – mc    (5.11)

where
    ∆m    differential magnitude
    mo    instrumental magnitude of the target
    mc    average instrumental magnitude of comp(s)
Do this for both the C and V observations, putting aside the V differential magnitudes, which are used in a later step.
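Applied across a whole session, Eq. (5.11) is just a loop. A minimal sketch (the function name and the sample magnitudes are hypothetical):

```python
def differential_mags(target_mags, comp_mags):
    """Eq. (5.11): delta_m = m_o - m_c for each image, where m_c is the
    average instrumental magnitude of the comparison star(s)."""
    return [m_o - sum(c) / len(c) for m_o, c in zip(target_mags, comp_mags)]

# Three C-filter images: target instrumental magnitude and the two
# comparison-star magnitudes measured on each image.
deltas = differential_mags([-5.900, -5.850, -5.950],
                           [[-6.10, -6.00], [-6.10, -6.01], [-6.09, -6.00]])
```

The same function serves the V series; just set those results aside as described above.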
5.21.5
Find the Average Differential Value of the Bracketed Exposures
Find the average of the bracketed Clear differential values. Call this value ∆Mv. It is very important that you keep the algebraic sign of this value, i.e., know whether the value is positive or negative.

5.21.6
Find the Standard Field Offset
For each V image of the reference field, find the difference between the catalog value and the measured instrumental value for one or two stars in the field,

∆Rs = Vs – Vm    (5.12)

where
    ∆Rs    differential magnitude for the given star
    Vs     V catalog value of the star
    Vm     instrumental magnitude of the star
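The offset computation for this step, Eq. (5.12) evaluated for every chosen star in every reference image and then averaged, might look like this in Python (a sketch; the catalog and instrumental values are hypothetical):

```python
def standard_field_offset(catalog_V, instrumental_v):
    """Average <dRs> over all stars and all images, where
    dRs = Vs - Vm for each star on each image (Eq. 5.12)."""
    diffs = [Vs - Vm
             for Vs, per_image in zip(catalog_V, instrumental_v)
             for Vm in per_image]
    return sum(diffs) / len(diffs)

# Two reference stars, each measured on three V images.
offset = standard_field_offset(
    [12.100, 13.250],                    # catalog V for stars 1 and 2
    [[-7.905, -7.890, -7.900],           # instrumental v, star 1
     [-6.760, -6.745, -6.750]])          # instrumental v, star 2
```

In a well-behaved system the six individual ∆Rs values should agree to within the measurement noise.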
Sum all the values, i.e., all the ∆Rs values for each star and each image, and then find the average of those values. For example, say you took three images of the reference field and use two stars in that field. You would then have a total of six values, three Vs – Vm values for each of the two stars. Divide the sum of those values by six to find the average standard field offset. Call this value <∆Rs>, the brackets indicating an average value. In a perfect system, the value of ∆Rs should be exactly the same for all stars in all images when using the same filter and corresponding catalog entry. This is because for any difference of exactly 1.0 magnitude in the catalog, there should be a difference of exactly 1.0 magnitude in the instrumental magnitude.

5.21.7
Find the Average V Instrumental Magnitude of the Target
Go back to the images of the target field taken with the V filter (step 5.21.2). Sum the instrumental magnitudes for the target and then find the average, i.e.,
<va> = ( Σ vj ) / N,   j = 1 … N    (5.13)

where
    <va>    average V filter instrumental magnitude of the target
    vj      instrumental magnitude of observation j
    N       number of values
Note that Eq. (5.13) uses a lowercase "v", meaning an instrumental magnitude.

5.21.8
Find the Target Anchor Magnitude
This is the standard V magnitude that anchors the asteroid to the V system. To do this, use the results of steps 5.21.6 and 5.21.7, i.e.,

Va = <va> + <∆Rs>    (5.14)
Note that the value on the left uses the uppercase "V", meaning it's a standard magnitude.

5.21.9
Find the V Standard Shift
This penultimate step finds the value that you add to every C filter differential magnitude so that you then have a set of data that is on the standard V system. Use the values found in steps 5.21.5 and 5.21.8 to find the offset:

Z = Va – ∆Mv    (5.15)
For example, given Va = 14.600 and ∆Mv = +0.250, then

Z = 14.600 – (+0.250) = 14.350

5.21.10
Apply the V Standard Shift to the C Observations

Finally, go back to the data of the C differential magnitudes found in 5.21.4. For each observation, add the value Z found in 5.21.9 to the C differential magnitude to produce a standard V magnitude. For example,

V = +0.100 + 14.350 = 14.450
V = –0.245 + 14.350 = 14.105
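Strung together, steps 5.21.5 through 5.21.10 amount to only a few lines of arithmetic. The sketch below reuses the numbers from the worked example; the variable names, and the <va> and <∆Rs> values chosen to combine to Va = 14.600, are my own illustration:

```python
dMv = +0.250                 # step 5.21.5: mean bracketed Clear differential

va_avg = -5.400              # step 5.21.7: <va>  (hypothetical)
dRs_avg = 20.000             # step 5.21.6: <dRs> (hypothetical)
Va = va_avg + dRs_avg        # step 5.21.8, Eq. (5.14): anchor magnitude

Z = Va - dMv                 # step 5.21.9, Eq. (5.15): V standard shift

# Step 5.21.10: shift every C differential magnitude onto the V system.
c_deltas = [+0.100, -0.245]
V_std = [dm + Z for dm in c_deltas]
```

This reproduces the values in the text: Z = 14.350 and standard magnitudes of 14.450 and 14.105.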
and so on.

5.21.11
Summary

The minimalist approach outlined above can yield observations reasonably close to a standard system. An important part of the process is to keep extinction effects to a minimum by measuring the reference and target fields when their air mass values are within 0.1 of each other. Also, be sure that you use reference and comparison stars that are similar in color to the target. To see why this latter point is important, assume that your system has a V transform of –0.150*(V–R). If you use a star that has a (V–R) value 1.0 m different from that of the asteroid, then there will be a 0.15 m error in your reduced value. If you keep the difference to 0.1 m, then the error is 0.015 m. This is more acceptable but shows why this approach is probably good to no better than 0.03–0.05 m for absolute precision. In many cases, this is adequate if working by yourself and trying to merge data from one night to the next. If you're trying to merge data from other observers, the more formal approach should be used.
Chapter 6
Second-Order Extinction

You're familiar with how the sun or moon appears reddened when near the horizon. This means less blue light is reaching your eyes than when the object is overhead. In the context of photometry, it means that blue objects "fade" faster as they approach the horizon than red objects. The second-order extinction term accounts for the excess fading of a bluer object. The second-order extinction term is expressed as k″ (k double prime). Unlike first-order extinction, which is in units of magnitudes per air mass, the second-order term is in units of magnitudes per air mass per magnitude of color index. By definition, k″ is 0 for U–B and V and, in practice, is used only with (B–V) observations. In reality, k″ is not exactly 0 for V, R, and I. However, it is close enough that the term can often be ignored. This is not necessarily the case for the B and C filters, especially the latter, which can have a significant blue response compared to the V filter.
6.1
Deriving a Single-Color Approach
The traditional second-order extinction formulae use color indices, meaning
k″bv = k″b – k″v    (6.1)
Note that because k″v is nearly 0, the final values of k″bv and k″b are essentially the same. The traditional formula to convert the raw instrumental magnitude to exoatmospheric becomes
vo = v – k′X – k″(CI)X = v – (k′ + k″(CI))X    (6.2)
where CI is the color index, e.g., b–v. Note that all magnitudes are instrumental, including those for the color index. To find the second-order term, you would plot ∆(CI) vs. (CI)X; the slope of the linear regression is the second-order extinction term. Since working with single colors and deriving standard magnitudes via differential photometry is the approach used in this book, it becomes necessary to take a different look at second-order extinction terms. Using Eqs. (5.5) and (6.2), the full differential formula for V, using standard color index, becomes
V1 – V2 = (v1 – k′X1 – k″(CI1)X1 + Tv(CI1) + ZPv)
        – (v2 – k′X2 – k″(CI2)X2 + Tv(CI2) + ZPv)    (6.3)
with analogous formulae for B, R, and I. Assuming X1 ≅ X2, rearranging terms gives Eq. (6.4), in which the first-order extinction and ZP terms both drop out. What remains is to examine the remaining second-order terms.
V1 – V2 = (v1 – v2) – k″((CI1 – CI2)(X1 – X2)) + Tv(CI1 – CI2)    (6.4)
From Eq. (6.4), you can see that if (X1 – X2) and/or (CI1 – CI2) is ≅ 0, then the entire second-order term is nearly 0. For example, assume k″b = –0.04 (a typical value for most systems) and that (CI1 – CI2) = 1.0, which is a fairly large color difference that should be avoided even with differential photometry. Even if working near the 30° altitude limit, the value of (X1 – X2) is likely ≤ 0.05 and so, at most, the net second-order correction will be

correction = –0.04(1.0)(0.05) = –0.002

This is likely far smaller than the precision to which you're working and so can be easily ignored.
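The size of that correction is easy to check numerically for your own comparison colors and air masses; a one-line sketch (the function is simply my own wrapper around the term in Eq. (6.4)):

```python
def second_order_term(k2, dCI, dX):
    """Net second-order correction: k'' * (CI1 - CI2) * (X1 - X2)."""
    return k2 * dCI * dX

# The example from the text: k'' = -0.04, color difference 1.0,
# air-mass difference 0.05.
corr = second_order_term(-0.04, 1.0, 0.05)   # about -0.002 mag, negligible
```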
6.2
The Slope of Slopes Method
While the second-order term can be safely ignored in the final reduction step using the differential formula, what about other times, such as when finding the transforms and the standard magnitudes of the comparison stars? In these cases, the second-order terms may or may not be important. For the moment, assume that they are needed and that you need a method to find them that is compatible with the one-color approach for reductions used in this book. This can be easily done using a method developed by Robert Buchheim, with a slight modification on my part. The general idea of the second-order term is to relate the change in a differential magnitude between a blue and red star to the difference in the color index of the stars times the air mass. The modification from the original Buchheim method is to substitute the standard color index for the instrumental index. This still maintains the principle behind the term. Perhaps not surprisingly, the values are not much different from those found when using the traditional approach. Just as with the standard approach, Buchheim's method calls for following a blue–red pair over a range of air masses. Then, for each star, you plot the instrumental magnitude, m, versus air mass, X. This is the same thing that you would do for finding first-order extinction, but measuring only one star.
Buchheim has shown that the resulting slope actually contains the first-order and second-order terms. His paper is part of the 2005 proceedings of the Society for Astronomical Sciences, which are available at the society's web site, http://www.socastrosci.org. In order to find the second-order term, you then find the slope of a line through two points:

X1 : standard color index of the red star, e.g., (B–V) or (V–R)
Y1 : slope found using the red star
X2 : standard color index of the blue star, e.g., (B–V) or (V–R)
Y2 : slope found using the blue star
This approach has been called “slope of the slopes” and has the additional benefit that the Y-intercept of the line joining the two points is the value for first-order extinction.
Figure 6.1 Plot of C instrumental magnitudes vs. air mass for blue star.
Figure 6.2 Plot of C instrumental magnitudes vs. air mass for red star.
Fig. 6.1 and Fig. 6.2 show the plots for the blue and red star, respectively, of a pair from the SDSS catalog listed in the appendices. The slopes of the two linear regression fits are shown in the plots and become the values Y1 and Y2 above, the differences being taken in the sense red minus blue. The color index values for the two stars were:

Star    (B–V)   (V–R)
Red     0.874   0.245
Blue    0.175   0.705
Using the (B–V) index:

k″c = (0.2849 – 0.3036) / (0.874 – 0.175) = –0.027

Using the (V–R) index:

k″c = (0.2849 – 0.3036) / (0.245 – 0.705) = +0.041

Similarly small values were found when using the B filter. Those values were also similar to the k″bv found by the traditional method. This is as it should be since, as noted above, k″b dominates k″bv because k″v is nearly 0.
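The slope-of-slopes arithmetic is simple enough to script. Below is a sketch using the values from the example; the function name is mine, and the slope arguments are the m-versus-X regression slopes for the red and blue stars:

```python
def slope_of_slopes(slope_red, slope_blue, ci_red, ci_blue):
    """k'' is the slope of the line through (CI_red, slope_red) and
    (CI_blue, slope_blue); the intercept at CI = 0 estimates the
    first-order extinction k'."""
    k2 = (slope_red - slope_blue) / (ci_red - ci_blue)
    k1 = slope_red - k2 * ci_red
    return k2, k1

# Slopes 0.2849 (red) and 0.3036 (blue) with the color indices above.
k2_bv, _ = slope_of_slopes(0.2849, 0.3036, 0.874, 0.175)   # about -0.027
k2_vr, _ = slope_of_slopes(0.2849, 0.3036, 0.245, 0.705)   # about +0.041
```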
6.3
When Is the Second-Order Term Applied?
There are two times in the overall reduction process when second-order terms might be used. The first is when finding the transforms (in which case, you'd need to find the second-order terms before finding the transforms) and the second, more important, is when finding the standard magnitudes of the comparison stars. As noted earlier, ignoring first-order extinction has the net effect of changing the zero-point of the transform equation but not the slope. The opposite is true for the second-order term, i.e., changing the value of the second-order term changes the slope of the transform equation but does not change the zero-point. This means that ignoring the second-order term may not allow you to apply a simple constant to all your data, since that constant is dependent on the color of the object.
To test the effects of ignoring versus including second-order extinction, I first found the value of the term for the Clear filter as shown in the example above. I then found the transforms using the (B–V) value of –0.027. As expected, this changed the value of the transform slope:

Transform w/o second-order: 0.378(B–V) + 22.699
Transform w/ second-order:  0.423(B–V) + 22.699
I then computed the magnitudes of several stars in each of two Henden reference fields, one at low (X = 1.078) and the other at high (X = 2.01) air mass. These values were compared to the catalog values and the average differences as well as standard deviations were found:

Error w/o second-order: 0.000 ± 0.009 m (low)   0.000 ± 0.022 m (high)
Error w/ second-order:  0.000 ± 0.010 m (low)   0.014 ± 0.019 m (high)
From this, it's not unreasonable to say that using the second-order term did not significantly alter the results, nor did it improve the scatter in the errors. Indeed, if an extreme case of nearly a one-magnitude difference in air mass and, on average, a color index of 0.4 m did not make a significant difference, then ignoring the second-order term for a target field fairly close to the reference field would introduce acceptably small errors. What this does point out is that keeping all stars, reference and comparison, similar in color to the target greatly reduces the errors introduced by including or excluding the second-order extinction term.
6.4
Summary
There is an old adage that “Better is the enemy of good enough,” meaning that sometimes in the pursuit of perfection, one can spend far more time than the improved results justify and, in a worst case, not get any results at all. In the software business, it’s called “feature creep,” where in hopes of adding just one more feature that less than 1% of the users might find important, the software is never released. Given the above, second-order terms will not be included in the reductions in this book. However, you should be aware of them, when they might come into play, and just how they can affect your final results. For a more complete discussion of second-order terms, especially in light of the more traditional approach, I’ll again refer you to the Henden and Kaitchuck book.
Chapter 7
Telescopes and Cameras

What follows are my thoughts on what equipment you need to get started gathering data for lightcurve photometry and analysis. Most of this may already be familiar to you. However, I often get questions asking what's needed to get into lightcurve work, even from those who have equipment of which I'm sometimes envious. So, I'll throw in my bit about what I think is important. Then you should ask others what they think is important. Then you should read what other people think is important. Only then are you probably ready to make your purchase.
If you don't have some of the equipment yet but have the opportunity to see it in action by attending a club meeting or star party, then make the most of that opportunity. One of the great tragedies in this hobby is hearing of scopes and cameras going idle because someone got in over his head. Frustration was followed by disappointment, which was followed by lack of interest. If you want to burn up the Spousal Permission Units fast, spend thousands of dollars on equipment and have it collecting dust in the corner three months later.
7.1
The Telescope
Among the most common types of telescope used by amateurs is the Schmidt–Cassegrain (SCT). Meade and Celestron are well-known companies that produce, in my opinion, very fine telescopes. The debates go on endlessly about which is better, how each could be better, and so on. I won't get into those issues here – or any place, really. I do know that what's available today far surpasses what was available for any price not that many years ago. We should count our blessings from time to time. Unless you can make your own mirrors and lenses, you're in the hands of the manufacturer. In general, the quality of the optics is not in question, but there are still things you can do to get the most out of the optical system.

7.1.1
Alignment
You want your telescope to put the incoming light of a star into the smallest possible point. That's going to happen only if the optics are well aligned. If you have a Newtonian telescope, there are several laser alignment tools that can help get a good alignment. For the most part, these tools don't work with Schmidt–Cassegrain, Ritchey–Chrétien, or classical Cassegrain optics. For these, you must use other methods,
the most popular being observing an out-of-focus star. A good book on this topic is Star Testing Astronomical Telescopes; see the listing in the Bibliography. Take the time to make sure your optics are well aligned. The quality of the images can be improved dramatically with only a few minutes' effort.

7.1.2
Baffling
CCDs are many times more sensitive than your eyes, and any stray light can and does affect the results. A dew shield on an SCT serves two purposes: cutting out stray light and helping prevent dew from forming on the front corrector plate. Some use small heat tapes of one design or another in lieu of the dew cap. I favor both on the worst of nights and the cap alone under less severe conditions, simply because the cap does serve as a first defense against stray light. If you have an open-tube or truss scope, wrap it with a black shroud. The shroud keeps out stray light that may interfere with seeing very faint objects.

7.1.3
Mirror Flop
Many commercial SCT telescopes have a trait called "mirror flop." This is caused by the design of the focusing mechanism, in which the main mirror is moved back and forth on a track system. As the mirror moves, it can twist in one direction or another so that the image appears to shift. This flop can occur even without focusing. As the scope moves through the course of the night, it may get to a point where gravity causes the mirror to shift slightly. The most recent versions of the SCTs have mirror locks. Once you get a good focus, you can lock down the mirror and then rely on an electronic focuser to make the minor adjustments for the night. These locks are available as accessory kits for the older models as well. They are not expensive and are well worth the time and effort to install.

7.1.4
Mount
A long time ago a friend joked about how all my homemade telescopes had "wobbly mounts." If you don't have a solid mount, with good tracking and without the "wobbles," you're going to have a difficult time getting good photometry. The light from the target and comparison stars will be spread out over more pixels, which decreases the SNR and makes aperture photometry less precise.

7.1.5
Balance
It's important that your telescope be properly balanced. Do this with all the equipment that you'll be using in place. It won't do to balance the tube and then add your three-pound camera. Some people have found that loading the balance in one direction or the other in RA helps tracking. You'll have to experiment to test this. Regardless, you don't want a severe imbalance that would put too much strain on the motors when the scope needs to move.

7.1.6
Permanent Setup
If at all possible, get a permanent setup for your telescope, even if it's a small flip-top lid so that you can leave the telescope in place. If you can't leave the scope outside, try to have a pier with the equatorial wedge (assuming a commercial SCT) permanently mounted. You probably won't be able to get a perfect reset on the polar alignment, but it will usually be close enough to suffice for exposures of two, maybe three, minutes. I used to go through this routine with one of my SCTs until I decided that hoisting it up and down each night, especially the down at the end of the night, was too inconvenient. The scope is now within the friendly confines of an observatory, which provides the additional benefits of security and wind/light block.

7.1.7
Polar Alignment
Before you can worry about how well your scope tracks, you need to give it a fighting chance. That means having an accurate polar alignment. Without a good alignment, the field drifts in declination and rotates. If a star in the field doesn't stay on the same set of pixels for at least as long as your longest exposure, you're going to get trailing. The drift method is still the best for achieving high-precision alignment. There are several resources on the Internet that explain the process in detail. The following steps outline the procedure for Northern Hemisphere observers. Southern observers, see the note at the end.
As you go through the steps, make sure you know which way is north. It isn't always "up." Many of those using a camera on commercially made SCTs have the camera rotated 180° so that they can get closer to the pole. In that case, "up" is often "south" (for Northern Hemisphere observers). The same can be said about north–south orientation if you're using a typical right-angle diagonal.

1. Aim the telescope at a star within ±5° of the celestial equator near the meridian.

2. Take an image and record the Y position of the star (assuming you have the field with north/south parallel to the Y-axis).

3. Wait five minutes and take another image.

4. Measure the Y position of the same star. If the star drifts north, the northern end of the polar axis is too far west of the pole. If the star drifts south, the northern end of the polar axis is too far east.
5. Repeat the above steps until the star does not drift more than one-half pixel for at least the length of your longest planned exposure.

6. Aim the telescope at a star within ±5° of the equator and about 20°–30° above the eastern horizon.

7. Repeat the steps of taking an initial image and a second image about five minutes later. If the star drifts north, the northern axis of the scope is pointing above the pole, i.e., too high. If the star drifts south, the northern axis is pointing below the pole, or too low.

Southern Hemisphere observers need to reverse which end of the axis is being considered. For example, if you are near the meridian and the star drifts north, then the southern end of the polar axis is too far east.

7.1.8
Tracking
Two-minute unguided images should be within the capability of a well-tuned commercial scope. If you can afford one of the expensive mounts with very high precision tracking, so much the better. For some of us, there will never be enough SPUs (Spousal Permission Units) to afford that luxury, and so we must find other methods to improve tracking. If it's available on your mount, take time to train the Periodic Error Correction (PEC). Usually, you can make an initial run followed by additional runs to refine the corrections stored by the scope. While less convenient, I found that doing manual guiding through a reticle eyepiece produced better results than using a guiding camera to apply the corrections. Maybe it's all those years of doing manual guiding to get images of asteroids on film for astrometry.

7.1.9
Automation
"To sleep. To dream." The unfortunate Hamlet's musings have a lighter and less philosophical meaning for today's photometrist. It's entirely possible to run a lightcurve program using a telescope that cannot be automated. It would mean being at the scope (or having remote access to the computer running the scope and camera control software) and manually telling the equipment what to do. That may be interesting for a while but would soon become tiresome. I wouldn't trade my automated scope for anything, mostly because I got tired of spending hours in sub-zero temperatures in the mountains of Colorado. A more practical feature of automation is that one can set the system into operation and retire for the evening. When morning comes, the scope is in its home position and, if you've really gone to town, the roll-off roof or dome is closed. You can then analyze your images at your leisure. Also, you can work more than one target at a time: take an image of the first, move to the second, take an image of it, and then return to the first target to start the cycle over. You can even throw in side trips to a standard field to get filtered images for photometric reductions. It really is amazing what can be and is being done these days. The only real concern about an automation system these days is that it might take off on its own. It's no fun to go out and find the scope has wrapped itself around the RA axis until it couldn't go any further but still tried. When you can smell the burned-out electronics from thirty feet away, you know you have problems. For those with automated domes or roofs, the dome shutter or roof may not open as hoped or may close before the telescope is in a safe position. The possibilities are many but also very remote. Automation is the only way for those with day jobs to get much done.
7.1.10 Remote Focusing
If at all possible, purchase a remote focuser for your telescope, preferably one with a stepper motor that controls the drawtube holding the camera. There are several popular brands: RoboFocus, Optec, and FLI are just some. The RoboFocus can be used on the focuser knob of commercial SCTs, but there is the chance of mirror flop when doing this. The DC-motor focusers, such as those supplied on the Meade GPS line and by JMI, among others, are quite usable, though without stepper-motor control the ability to return to a given focus position is much less reliable. The RoboFocus and Optec focusers, among others, include temperature compensation. After training the focuser for the best focus position at a given temperature, the software or firmware can take over and keep focus without having to take a series of images. This is an important feature for long runs, especially when the temperature tends to change significantly during the night. In the height of summer in desert locations, the daytime and nighttime air temperatures can differ by 40°F or more. As the air and scope cool down, the focus changes dramatically. It's very difficult to get good photometry when the focus change has turned stars into donuts.
7.2 The CCD Camera
There are many cameras available these days. I won't recommend one particular brand over another. Instead, I'll give you some of the factors I think you should consider. Then it's up to you to do the research to determine which camera best suits your requirements and, most important, your budget. If you're just getting started, don't rush out to buy the biggest, meanest, most feature-packed camera you can find. I've heard of people buying large-format cameras, filter wheels, and a complete set of filters and then asking if they have what they need for photometry or pretty pictures. Know what you want and why you want it. Then you can make an informed decision and be happy with it. If you want to move up someday, the odds are very good you can get a good price for your used camera – or put it on a second scope!
7.2.1 Pixel Size
Image scale is one of the more important considerations when buying a camera. Sometimes you can achieve the best match by binning, which is where the camera and/or software combines groups of pixels into a single pixel: 1X1 binning means there is no grouping, while 2X2 binning takes a square of four contiguous pixels and combines them into a single pixel. This increases the effective pixel size and sometimes helps the signal-to-noise ratio (SNR) by putting the light from the star on fewer pixels, meaning fewer pixels within the measuring aperture contribute background noise. This is not always the best plan (see the discussion on pixel matching on page 36). The oft-heard rule of 2 arcseconds per pixel is not the best rule. Instead, you need to consider the average seeing in your area, expressed in arcseconds, and then determine what pixel size is needed so that the seeing disk covers no less than two pixels. If your seeing is about 4 arcseconds, then the 2 arcseconds/pixel rule works. If the seeing is 1 arcsecond, then pixels of 0.4–0.5 arcseconds in size are a better match. If you're planning imaging other than photometry, you may need to compromise. For example, you can get a camera with 9-micron pixels that will be good for planetary imaging and then bin to 18- or even 27-micron pixels for regular photometry work. Sometimes there is no good compromise, i.e., when 1X1 binning the pixels are too small and when 2X2 binning they are too large. When in doubt, use a pixel size that over-samples, or use the technique of defocusing the image just a little, which makes the stars cover more pixels (the idea is to completely cover at least two pixels). As always, moderation is the name of the game. Don't make donuts out of stars.
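The matching rule above reduces to two small calculations. Here is a sketch in Python; the function names are mine, not from any photometry package:

```python
def pixel_scale(pixel_microns, focal_length_mm):
    """Arcseconds per unbinned pixel.

    206,265 arcsec per radian; with pixel size in microns and focal
    length in mm, the constant becomes 206.265.
    """
    return 206.265 * pixel_microns / focal_length_mm

def max_pixel_scale_for_seeing(seeing_arcsec, min_pixels=2):
    """Largest pixel scale that still spreads the seeing disk over min_pixels pixels."""
    return seeing_arcsec / min_pixels

# 4" seeing gives the familiar 2"/pixel rule; 1" seeing wants ~0.5"/pixel
print(max_pixel_scale_for_seeing(4.0))  # 2.0
print(max_pixel_scale_for_seeing(1.0))  # 0.5
```

For an 80-inch (2032-mm) focal length and 9-micron pixels, `pixel_scale(9, 2032)` gives about 0.91 arcseconds per pixel, in agreement with Table 7.1 below.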
FL (in)   "/mm    7µ pix   9µ pix   20µ pix   24µ pix   FOV (7µ,     FOV (9µ,    FOV (20µ,   FOV (24µ,
                                                        2184 pix)    765 pix)    512 pix)    1K pix)
  40      203.0    1.42     1.83     4.06      4.87       51.73        23.30       34.65       83.16
  60      135.3    0.94     1.22     2.71      3.25       34.49        15.53       23.10       55.44
  80      101.5    0.71     0.91     2.03      2.44       25.86        11.65       17.32       41.58
 100       81.2    0.57     0.73     1.62      1.95       20.69         9.32       13.86       33.26
 120       67.7    0.47     0.61     1.35      1.62       17.24         7.77       11.55       27.72
 140       58.0    0.41     0.52     1.16      1.39       14.78         6.66        9.90       23.76
 162       50.1    0.35     0.45     1.00      1.20       12.77         5.75        8.56       20.53
Table 7.1 The pixel scale for four common pixel sizes and various focal lengths. The pixel scales are in arcseconds while the field of view is in arcminutes. For the 7- and 9-micron chips, which are usually rectangular, the field of view for the long side is given.
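The entries in Table 7.1 follow directly from the small-angle relation. A quick Python check (helper names are mine) reproduces the first row:

```python
def arcsec_per_pixel(pixel_microns, focal_length_in):
    # 206,265 arcsec per radian; focal length converted from inches to mm
    fl_mm = focal_length_in * 25.4
    return 206.265 * pixel_microns / fl_mm

def fov_arcmin(pixel_microns, focal_length_in, n_pixels):
    # field of view along one axis, in arcminutes
    return arcsec_per_pixel(pixel_microns, focal_length_in) * n_pixels / 60.0

# First row of Table 7.1: 40-inch focal length, 7-micron pixels, 2184-pixel side
print(round(arcsec_per_pixel(7, 40), 2))   # 1.42
print(round(fov_arcmin(7, 40, 2184), 2))   # 51.73
```

Doubling the effective pixel size (2X2 binning) doubles the scale, exactly as the text describes.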
Table 7.1 gives the pixel scale for some common combinations of focal length, the scale in arcseconds/mm, the size in arcseconds at 1X1 binning, and the field of view for the physical chip. For the 9-micron example, the assumption is the popular 765X512 chip, and the FOV is for the long side. For its double-sized sibling, 1530X1024 pixels, the field of view doubles but the pixel sizes and scales remain the same. To get the 2X2 binning scale of the chips, double the 1X1 values. A last thought on pixel scale matching: pixels vary in sensitivity not only from one to another but even within the area of a single pixel. If you use pixels that are too large, i.e., the star is concentrated on only one pixel, or the star straddles two pixels but doesn't cover both completely, that can affect the precision and repeatability of the photometry on the star. Given this, it's better to favor oversampling an image, using pixels that are a little too small according to the rule of thumb, so that the star touches or covers more pixels. I had a friend who complained that his photometry when binning 2X2 wasn't as good as when not binning at all (1X1). This issue was part of the cause.
7.2.2 Field of View
Field of view (FOV) goes hand in hand with pixel size and focal length. Matching the pixel size is, I believe, the more important consideration. Your most practical option regarding field of view that doesn't trap you in a circular game of matching pixel size is getting a camera with a larger chip but the same-sized pixels. If you go from a 512X512 array to 1KX1K, you double the field of view. Naturally, the larger chips are more expensive. In general, I recommend a FOV of no less than 10 arcminutes. With less than that, you often have trouble finding enough comparison stars of brightness similar to your target. Consider also that, on average, a main-belt asteroid moves about 0.25° a day. That's about 0.625 arcminutes per hour. In an eight-hour run, the asteroid moves about five arcminutes, or about half that ten-arcminute field. If you have a camera with a smaller field, you can still do good photometry. If working a moving target, you may have to break up the night into several sessions, using a different set of comparison stars for each session. For a fixed target, you have to hope that at least one good comparison star can be found in the field; otherwise you won't be able to do true differential photometry.
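The drift arithmetic above is easy to verify:

```python
deg_per_day = 0.25                       # typical main-belt asteroid motion
arcmin_per_hour = deg_per_day * 60 / 24  # 0.25 deg/day = 15'/day = 0.625'/hr
drift = 8 * arcmin_per_hour              # motion over an eight-hour run

print(arcmin_per_hour, drift)  # 0.625 5.0
```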
7.2.3 Focal Reducers
This topic is here instead of under telescopes because reducers are used to change the field of view and/or match pixel size. While I believe that matching pixel size is very important, I do not recommend using a reducer to achieve that end. Reducers introduce another set of glass elements that absorb and scatter light. They usually create strong vignetting, where the edge of the field receives measurably less light than the center. Flat fielding can usually take care of vignetting, but you will have to make sure that you get very good flats.
7.2.4 Anti-Blooming Versus Non-Anti-Blooming Chips
The great thing about CCD cameras for photometry is the wide range over which the response of the camera is linear, i.e., the number of electrons generated in each pixel is proportional to the number of photons received. That's not to say that every pixel has the exact same response but that, for example, a given pixel generates double the number of electrons for double the number of photons. Anti-blooming is a feature built into the chip that prevents individual pixels from being over-saturated by draining off excess electrons. This makes for nicer-looking pictures when bright stars are in the image because you don't see the familiar "chimney pipe" effect – that long vertical streak above and below a star. The downside for photometry is that the pixel value no longer represents the actual number of photons that hit the pixel, i.e., the response is no longer linear. Also, because of the anti-blooming design, many such chips are about 30% less efficient at collecting photons. Usually the draining effect in an ABG chip starts when pixels are at about 50% saturation. This means that if you keep your critical objects, i.e., the comparison stars and target, below this point, you can still use an ABG (anti-blooming gate) camera for photometry. However, there is still that loss of efficiency. The NABG cameras don't siphon off electrons, and so you quite often see the chimney pipe effect. While not so good for pretty pictures, the gain in efficiency and the much wider range of linearity are of considerable value to good photometry.
7.2.5 ADU Bits, Gain, Full Well Depth, and Linearity
Do not assume that because you have an NABG chip you can expose up to the ADU limit of the camera. What did he just say? ADU stands for analog-to-digital units. When the camera reads off the number of electrons, the A/D converter in the camera changes this number to one that matches the number of gray scales the manufacturer allows. For example, a 16-bit camera allows a range of 0 to 65,535 gray-scale values. An 8-bit camera allows a range of 0 to 255. That means pure black is 0 and pure white is the maximum value. Actually, pure black is usually a small positive number. This "bias" is used to keep numbers from going negative when no photons are received. The gain, ADU conversion factor, and full well depth combine to determine whether your camera is accurately representing the number of electrons received. The gain is often expressed in terms of e–/ADU. This is the number of electrons it takes to generate a value of 1 ADU. A typical gain is 2.3 e–/ADU. The ADU conversion factor is the number of discrete values that can be represented, e.g., 256 or 65,536. The full well depth is the maximum number of electrons that can be stored by a given pixel. If any more photons strike the pixel, the resulting electrons spill over into adjacent pixels (or are drained off in an anti-blooming chip) and are not counted. ADU saturation is reached when
ADUSat = MaxValue × gain        (7.1)
For example, if your camera has a 16-bit ADU converter, i.e., the maximum value is 65,535, and a gain of 2.3 e–/ADU, then ADUSat = 65,535 × 2.3 = 150,730. If more than 150,730 electrons are generated by any pixel, the ADU converter cannot accurately report the photon count. Full well saturation is tied to the gain and ADU by

MaxADU = Full well depth / gain        (7.2)
In the example camera above, say the chip being used has a full well depth of 85,000 electrons – a typical value. Then MaxADU = 85,000 / 2.3 = 36,956. As you can see, this is much less than the maximum ADU value of 65,535. The best setup for a camera is to set the gain and/or the number of bits in the ADU converter so that full well depth and ADU saturation are reached nearly simultaneously. Even if full well depth and ADU saturation are not reached, that still does not mean that your camera responds linearly across the full range of ADU values. In fact, it may stop being linear well before either saturation point. There would be nothing to alert you to this, such as ADU values at the ADU maximum or bleeding stars. You would be left to wonder why your photometry, especially on brighter targets, was neither precise nor accurate. A simple test for linearity is to take several images of a field containing stars with a good range of magnitudes at increasing exposures, each double the previous one, e.g., 3–5 images each at 1, 2, 4, 8, and 16 seconds. Since each exposure is twice the previous one, it should collect twice as many electrons. Find the average of the maximum ADU values for each star at a given exposure, not the average of the total values. Then plot those averages versus exposure time. There should be a straight line joining the data points for each star until saturation is reached. The maximum ADU value where the data points deviate from the line is where linearity breaks down. You should not measure any object with one or more pixels that exceed this value. The reason you don't use the total of all pixel values for a star is that one or more pixels might be saturated and so skew the results. I'll refer you to the Berry and Burnell or Howell books for ways to determine the true e–/ADU (gain) for your camera.
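Equations 7.1 and 7.2 and the worked examples can be expressed as a short sketch; the function names are mine:

```python
def adu_saturation(max_adu, gain_e_per_adu):
    """Eq. 7.1: electron count at which the A/D converter tops out."""
    return max_adu * gain_e_per_adu

def full_well_max_adu(full_well_e, gain_e_per_adu):
    """Eq. 7.2: ADU value at which the pixel's full well is reached."""
    return full_well_e / gain_e_per_adu

# 16-bit converter, gain 2.3 e-/ADU, 85,000 e- full well (the text's example)
print(round(adu_saturation(65535, 2.3)))   # 150730
print(int(full_well_max_adu(85000, 2.3)))  # 36956
```

Since 36,956 is well below 65,535, this example chip reaches full well saturation long before the converter runs out of values, which is exactly the mismatch the text warns about.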
7.2.6 Front-Illuminated Versus Back-Illuminated CCD Chips
Fig. 7.1 shows the two basic ways CCD chips are constructed. On the left is a front-illuminated chip. Here, the incoming light strikes the gates – the tiny blocks on the SiO2 insulation layer that are required for creating the charge that allows the device to convert photons into electrons. The problem is that the photons must pass through these gates before they can be converted to electrons. The gates act as a mask, particularly in the blue regions, where they are often opaque.
Figure 7.1 The basic construction of CCD chips. On the left, the light must pass through the mostly opaque gates before reaching the substrate. This makes the front-illuminated chip less efficient than the back-illuminated chip.
Figure 7.2 Typical QE curves. The plot shows the quantum efficiency of several types of CCD chips. The QE measures the percentage of photons converted into electrons. Back-illuminated chips have the highest QE.
Figure 7.3 The structure of a micro-lens CCD chip. Tiny lenses are placed on top of the chip, straddling the opaque gates. The lenses redirect the incoming light into the individual light-sensitive wells.
On the right is a back-illuminated chip. While the light must pass through the substrate, the substrate is usually much thinner than for the front-illuminated chip. Most important is that the photons do not have to pass through the gates. As you might guess, the back-illuminated chip is more sensitive than the front-illuminated chip. This is quite apparent in Fig. 7.2, where the back-illuminated chip has a higher quantum efficiency, i.e., more of the photons hitting the chip are converted to electrons. From this, you might presume that you would automatically get a back-illuminated chip camera. As always, there's a price for everything, and that certainly applies to back-illuminated chip cameras. A camera with a back-illuminated chip of the same size and pixel count as its front-illuminated counterpart is usually much more expensive. A back-illuminated chip often suffers from a set of fine interference lines in the image, especially in the red and infrared bands, since this is where the chip is most efficient. This fringing is caused by the thinned chip having just the right thickness such that faint emission-line glow in the night sky causes a Newton's rings type of pattern. Flat fields can't always remove the interference patterns. An IR blocking filter in the optical path is usually the only solution. If you can afford the cost, including the extra IR filter, I would recommend the back-illuminated chip camera. However, you must factor in all other considerations. If faced with the option to get a back-illuminated camera or one with a blue-enhanced chip that's twice the size and about the same price, I'd definitely go with the larger chip. That's me. You have to decide what's right for you. Finally, there are the new micro-lens chips, where tiny lenses are placed over the imaging surface to redirect some of the light that would hit the opaque sections into the light-sensitive wells (see Fig. 7.3). These lenses are optimized for f/6–f/10.
The effects of the lenses on photometric quality, and their behavior at faster f/ratios, are the subject of several studies. So far, there is no reason to believe that cameras with these chips are any less useful for photometry than those with chips that do not have the micro-lenses.
7.2.7 Download Speed
Most new cameras built specifically for astronomy in the past few years have USB (Universal Serial Bus) connectors. USB camera downloads are very fast compared to those of the older parallel-port cameras. With these faster cameras, there is little to say about improving download speeds. The only concern with USB cameras in my mind, and it's minor, is that USB itself allows cable runs of only 15 feet or so. There are a variety of extenders, getting less expensive all the time, that get around this difficulty. Why is download speed important? Rapid imaging is often required when monitoring certain kinds of variable stars or novae and supernovae. It's not so necessary when working asteroids, save in those rare cases where the asteroid rotates once every 10–30 minutes. There, a 30-second download becomes a significant part of the rotation period. For those with parallel-port cameras, there are some tricks to allow you to take more pictures in less time.

1. Use binning to decrease the number of pixels. An image binned 2X2 has one-fourth the number of pixels. Keep in mind the need to match pixel size to image scale and seeing. Sometimes better or poorer seeing dictates which binning, and so which effective pixel size, you'll use.

2. Download only a portion of the frame. If you can download only half the frame and still have enough comparisons and the target, that cuts the download time by a factor of two.
7.2.8 Temperature Control/Regulation
Many, but not all, cameras have built-in cooling and temperature regulation that keeps the CCD chip at a nearly constant temperature well below that of the ambient conditions. This is important because cooling the CCD chip reduces its native thermal noise. Dark frames remove only the bias and the thermal noise in the camera at a given temperature, meaning they also have a minimum noise level. The noise level in the processed image created by subtracting a dark from the target image will be no lower than the dark's minimum level, a level that may be so high as to prevent imaging faint targets or getting good SNR levels. You don't want the noise level changing during a session where you hope to get 0.01 m precision photometry. If your camera doesn't regulate the temperature, then any given image may have significantly different thermal noise from what's in the dark frame. There are several methods of cooling the chip beyond the usual thermoelectric coolers included in many cameras. The one most likely to be found outside a large facility is liquid-assist, where tubes are run between the camera and a reservoir, usually filled with water. I find those extra tubes a hassle, and there's always the danger of spilling the coolant. I do not generally recommend using the method, nor making it a priority when considering a camera. However, for those living in hot desert locations, using every possible means to cool the camera is sometimes required.
7.2.9 Software Compatibility
If you're going to automate your observing program, you need to consider whether the camera operations can be scripted by the camera control software. At the very least, you hope that the camera drivers allow you to use Visual Basic or COM scripting to access and control the camera. The most popular brands of cameras, such as FLI, SBIG, and Apogee, are easily controlled by a number of software packages, including MPO Connections. The Starlight cameras are very popular, especially outside the United States. Not as many software packages can control those cameras, but there are some. Do some research and consider how you're going to run your observing program both in the near and intermediate future. I don't say long-term because technology is changing too quickly these days.
7.2.10 Support
I mention this only to say that the support from all the well-known manufacturers has been excellent for me. However, do keep an eye on the newsgroups to see what issues tend to come up and how the companies respond. One company’s approach may appeal to you for one reason or another. Technical issues aside, you need to feel comfortable with your decision. That includes the “warm fuzzies” as much as anything else.
7.3 Digital and Web Cameras
Now a few words about the digital and web cameras that are becoming ever more popular. Frankly, many of them may not be suited for accurate photometry, at least in terms of finding accurate absolute magnitudes. Even differential photometry might be poor at best. It depends on the type of camera you have. The lower-end digital cameras have several problems when it comes to photometry. If the ADU converter is limited to 8–14 bits, i.e., 256 to 16,384 discrete values, then the digitization noise is too high. Go back to the idea of full well depth. If a camera has a full well depth of 100,000 and an 8-bit ADU converter, then each ADU step = 100,000 / 256 = 390.6 electrons. This may not seem to be a problem, but when you realize that photometry is done based on the number of electrons, not ADU units, a difference of only a few ADU could mean a significant number of electrons in proportion to the total count. In other words, the conversion steps are too coarse. This is digitization noise – or at least one definition of it. Another problem is the file format. Many cameras produce only compressed files, such as JPEG or MPEG. Even 24-bit TIFF images have some compression. This means that some of the original data is lost. You can't do accurate photometry on "lossy" data. Quite often the cameras apply a white balancing of some sort. This alters the color response of the individual pixels and may make some stars fainter or brighter than before the correction. Of course, this is true with photometry filters, but those are on a known system and the changes can be accurately predicted. Just as the hummingbird and the bumblebee were not supposed to be able to fly – yet there they are, buzzing about my house – some people have shown that the impossible is possible. One of those is Kevin Alton, who has used a modified web cam (SAC-7) in 8-bit monochrome mode to produce lightcurves of eclipsing variables with 0.05 m precision down to 13.5 m by stacking a set of images. He placed a Johnson V filter in front of the camera and found a very linear fit between the magnitudes he measured and the V magnitudes from a Landolt field. The one factor he did not include in his original paper was the color of those stars. Further tests are needed to determine and account for color dependencies. To read his paper on the web, do a Google search that includes "Basic Webcam Photometry" and "Kevin Alton." You should find a hit for it on the CloudyNights web site.
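The digitization-noise arithmetic quoted above (a 100,000 e– full well read through an 8-bit converter) works out as follows; the helper name is mine:

```python
def adu_step_electrons(full_well_e, adc_bits):
    # electrons represented by a single ADU step
    return full_well_e / 2 ** adc_bits

print(adu_step_electrons(100_000, 8))   # 390.625 e- per step: far too coarse
print(adu_step_electrons(100_000, 16))  # 1.52587890625 e- per step with 16 bits
```

The same full well read through a 16-bit converter gives steps of about 1.5 electrons, which is why astronomical CCD cameras use 16-bit converters.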
7.4 Filter Wheels
Your plans should include a filter wheel at some point. You can make a gradual transition from all-Clear observations by getting only a V filter and inserting it into the optical path. Many filters have threads that fit into the nosepiece of a CCD camera. This gets you started, though going to and from unfiltered conditions is not convenient – remember that if you move the camera for any reason, you really should shoot your flat fields again. A simple slider allows you some flexibility and convenience, though you must be at the telescope to change filters. That's not a very good long-term solution if sleep is a part of your observing plan. There are many good filter wheels available that can be controlled via software and, therefore, remotely. Among these are those from SBIG, FLI, Optec, True Technologies, and DFM. At the least, you should get a wheel or slider that allows three positions: Clear, V, and one other filter. I use R, but you may want to use a B or I filter. Keep in mind that even with a blue-enhanced chip the efficiency in the blue region is still relatively low. Generally, you'll need much longer exposures in B than in V, and longer still compared to R or I. If for nothing else, the Clear filter in lieu of an empty slot keeps the focus change to a minimum when going from filtered to unfiltered imaging. The filters you get should have the same optical thickness, meaning the focus should not change when switching from one to another (it always does, but not by as much). Some programs include the ability to adjust the focuser whenever the filter is changed. I've found this feature to come in very handy, and it is something you should consider a plus when selecting your camera control software. Keep in mind that if you have a camera with a larger chip, e.g., a 1KX1K chip that's about one inch on a side, you need a wheel that holds 2-inch filters. That's because the filters are inserted into the optical path before the chip, sometimes by several inches.
You need a filter large enough so that the incoming cone of light is not truncated (vignetted). Of course, 2-inch filters are much more expensive than the traditional 1.25-inch filters. However, you don’t have to get an entire CBVRI filter set right off the bat. Start with Clear and V and then get others as the budget recovers.
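The per-filter focus adjustment mentioned above amounts to a lookup of relative offsets. The offset values and helper below are hypothetical illustrations, not taken from any particular package:

```python
# Hypothetical focus offsets, in focuser steps, relative to the Clear position
FOCUS_OFFSETS = {"Clear": 0, "V": 15, "R": 12, "B": 40}

def focus_move_for_change(current_filter, new_filter):
    """Steps to move the focuser when swapping filters of unequal optical thickness."""
    return FOCUS_OFFSETS[new_filter] - FOCUS_OFFSETS[current_filter]

print(focus_move_for_change("Clear", "B"))  # 40
print(focus_move_for_change("B", "V"))      # -25
```

With a matched filter set, all the offsets would be near zero and the feature would rarely fire.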
7.5 Guiding Considerations
In most cases, I don't guide when doing lightcurve work. It can needlessly complicate the imaging process. Your scope should be able to handle tracking for as long as the most common exposures. If the mount can't meet this standard, then you should spend time doing what's needed to bring it up to standard. If you're very daring, you can say this was an unexpected fatal flaw and so claim justification to the family for that very expensive mount. Good luck on that! If you are trying to chase fainter objects and guiding becomes necessary, then you have two options – guide with an "on-board" chip on the imaging camera or use a separate guide scope. One important point about guiding with the "on-board" chip is that the chip often sits behind any filter you might be using. That could make finding stars bright enough for guiding very difficult. Also, if the imaging camera closes its shutter while downloading an image and/or waiting for the next image, then it's certainly not possible to guide. The advantage of having a separate guide scope is that you can have a much wider field of view and so almost guarantee having a guide star available. This solution has its own set of problems, especially if you're using a commercial SCT for imaging the target and it has mirror flop. The guide scope can't keep up with the image shift due to a moving primary, and at least one image will be lost when the primary mirror moves. There is also the issue of differential flexure, where the optical axes of the main scope and the guider do not keep the same relative alignment throughout the exposure. This causes the guiding camera to try to correct for motions that are not the result of the main scope's tracking errors. Despite these issues, many people use a separate guide scope and camera with considerable success. Again, most of these issues are moot if your mount can track well. Work on that first and foremost.
Then worry about whether you need to guide because of the nature of the target and not because the mount can’t do its job.
Chapter 8
Imaging and Photometry Software

8.1 Image Acquisition Software
There are many excellent packages for controlling your camera and/or telescope, regardless of whether you are automating your system. Here's a small listing:

Telescope Control
  MPO Connections (Bdw Publishing)
  TheSky (Software Bisque)
  Astronomer's Control Panel (ACP – DC3 Dreams)
  Starry Night Pro (Space Software)

Camera Control
  MPO Connections (Bdw Publishing)
  CCDSoft (Software Bisque)
  MaxIm DL (Diffraction Limited)
  AstroArt (MSB Software)

Some programs provide the basics while others go all out and include scripting capabilities or can be scripted using Visual Basic or COM programming. The program you're running and how you want to run it will dictate which software is right for you.
8.1.1 Ease of Use
This is really the most important issue. I use the rule that says the best program, for whatever use, is the one you like and use. It doesn't matter if another program has more features. If they are difficult to learn and/or use, or are not important to you, then you'll have wasted time and money and maybe even become discouraged enough to give up on lightcurve work entirely. On the other hand, you need to keep in mind that programmers are placed in no small bind by the users of their software, who want more options and features – yet still want the program easy to use. It's like asking a city to pave all the roads more often but not to raise taxes to do it. There is a price to be paid for enhanced programs – complexity. Use the technical support line. Take advantage of the many newsgroups set up for users of a given product to get you through any sticky parts of the learning curve. You may find that a program is not so hard after all, and you'll meet a lot of great people who are willing to help and who have built a wealth of knowledge on a number of subjects.
8.1.2 One or Several Programs
One-stop shopping is convenient but you don’t always get the best choice. It is certainly easier in many regards to have one program control your entire imaging system. You don’t have to worry about multiple setups or switching among programs. The compromise may be that the features are not as complete as when using several programs, each dedicated to a specific purpose. An excellent example of a suite of programs for image acquisition is the one from Software Bisque using TheSky, CCDSoft, and – optionally – Orchestrate. TheSky is an excellent planetarium program that includes telescope control for a number of different mounts and can also control some automated observatory domes. CCDSoft controls a large number of cameras, filter wheels, and focusers. It also includes many image-processing features and will perform automated astrometry and photometry. For the “one stop shopper,” MPO Connections allows control of the telescope, filter wheel, focuser, and camera within one program. The number of supported mounts and cameras is not as great, but the most common ones are included. Connections includes a scripting engine of its own, similar to Orchestrate, with the exception that one cannot control it (or the rest of the program) using Visual Basic or COM programming. However, the list of available commands is rather extensive and handles the needs of just about any lightcurve program. MPO Connections does not do any image processing per se. That and astrometry/photometry are left to a separate program, MPO Canopus. These are only two considerations. You need to visit the vendor sites listed above and others to compare the available features. Ask questions of the software providers. Join the support newsgroups and ask questions there. It’s much better to spend a bit of time researching the software to see if it’s right for you before you buy it.
8.2 Specific Features
There are several specific features that I consider important to lightcurve work and that should be in the image acquisition program or programs that you select. You may have some additional requirements.

8.2.1 Ability to Take Multiple Images at Fixed Intervals
Lightcurve work requires that you obtain a series of images over time. The acquisition software should be able to take an image, store it under a unique name, wait
a specific time, and then repeat the process until it is told to stop because the target is too low, twilight has started, or for another reason. The last part has more to do with your getting sleep than getting images, so give it the weight you think it deserves.

8.2.2 Ability to Change Filters
If you are using or plan to use filters, the software must be able to change the filters, assuming you have a wheel that can be remotely controlled. Better yet is if the software can change the filters within a series of exposures that make a single master exposure. For example, say you’re working a variable star in B, V, and R. For each exposure set you want to take three images through each filter. Can the software do this? Can it use a different exposure time for each filter? Can it be told the order of the exposures, e.g., BVRBVRBVR, BBBVVVRRR, or some other combination? Some software packages allow the focus to be changed for each filter to account for differences in optical path length. This may be critical if your filter set has a wide range of focus positions, and it is worth considering.

8.2.3 Ability to Keep the Target Centered
If you’re working a variable star, there’s no worry about it moving during your observing run. An asteroid is a different story. Even an average main-belt asteroid is going to move about 5 arcminutes during an eight-hour run. Can the telescope program periodically confirm the true pointing location of the telescope and, if necessary, correct that position? Unless you have a very accurate mount and precise polar alignment, the field can wander some during the course of an eight-hour run. Some programs have a feature that uses a recent image to perform automatic astrometry and determine the current position of the scope. If the position is not within a certain distance of where the scope should be, the program moves the scope to the correct position. This autosync of the scope’s position can be used with periodic commands to center the asteroid, thus keeping the asteroid nearly centered throughout the night. The autosync feature also comes in handy if you are shooting several targets during the night since it can be used to assure the telescope moved to the requested position. Depending on your mount and the separation of the targets, you may have to call the sequence only infrequently to keep the targets reasonably centered.

8.2.4 Ability to Image Multiple Targets
This is not a requirement but a “nice to have” feature. If your scope can track reasonably well but has problems doing accurate slews, it’s probably best to stay on one target. However, this can be a considerable waste of observing time. If you take an image and then wait two minutes before the next, that’s two minutes the shutter could be open gathering photons from another target. Even if you account
for the time to move to another target, you could be making better use of your equipment. It’s usually not hard at all to find two targets within a few degrees of each other. If your equipment and software allow, get the most photons you can each time you have good observing conditions.
8.3 Photometry Software
There are several excellent and affordable photometry programs available today. Most of the programs do much more than photometry, so you’ll get more for your money than just date/magnitude pairs. Here is a partial list of what’s available; my apologies to those not included. You’ll find web site listings in the appendices.

Photometry Software
MPO Canopus (Bdw Publishing)
AIP4Win (Willmann-Bell)
MaxIm DL (Diffraction Limited)
Mira (Axiom Research)
IRAF (IRAF programming group – UNIX based)
AstroArt (MSB Software)

Once again, I’ll offer my thoughts about what you should consider when looking at photometry software. Up front, I don’t consider exotic image-processing features important to the goal at hand – good photometry for lightcurve work. Of course, I do realize that you may want to do other work and want more from your software. On the other hand, do not make the mistake of trying to do good photometry on images where you’ve changed the data so that it no longer represents reality, i.e., where you’ve done any non-linear alteration of the data such as log scaling, etc. Bias frames, dark frames, and flat-fields are fine since they remove noise and artifacts so that what’s left is “pure data”. Stacking several images is permissible, too, providing you, or the software, take the necessary steps to compute the actual effective time of the exposure – it’s not necessarily the middle of the range from the start of the first exposure to the end of the last.

8.3.1 Ease of Use
As with scope and camera control software, I consider this the most important point. If the learning curve is steep and long, then the software may not be for you. I will also repeat that you should take advantage of the user groups for a product to see if getting to the level part of the learning curve can’t be done a little faster and/or with less pain. There’s no need to go it alone these days.
8.3.2 Accurate Photometry
There’s no doubt here. If the program can’t give you accurate and, just as important, consistent results, then you shouldn’t be using it. How do you determine the consistency? For one, you can start with an image of a well-determined standard field. I like M67 in Cancer since there is a good range of colors and magnitudes. Be sure to use a filter — V is the first choice — to help eliminate color dependencies of your system. Use the software to determine the instrumental magnitude of as many stars as possible. When you do the test, try to use the smallest possible aperture that still includes all of the brightest star you’ll measure. If your software displays the full-width at half-maximum (FWHM) value for stars, use an aperture that has a diameter of 2–3 * FWHM. You’ll see recommendations for 4–5 * FWHM. My experience shows this may be a little too large, particularly if you have good seeing and tracking. Regardless, be sure that the selected aperture includes all of the obviously visible disc of moderately bright stars. The smaller aperture minimizes errors that are introduced as you measure fainter stars and the sky background starts to dominate the data within the aperture. Once you’ve determined and recorded the instrumental magnitudes, plot each one against the catalog magnitude for the given star. An example using data from MPO Canopus is shown in Fig. 8.1.
Figure 8.1 A photometry linearity test. The plot shows the catalog V magnitude (X-axis) against the measured instrumental magnitude (Y-axis) of a number of stars in M67 using a V filter and measured in MPO Canopus. This shows the program is able to measure stars of varying brightness in direct, linear relationship to their catalog magnitudes.
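The linearity test lends itself to a quick script as well as a spreadsheet. Below is a minimal pure-Python sketch of the slope-and-correlation calculation; the catalog values and the -21.3 zero point are invented for illustration, not real M67 measurements:

```python
import math

# Linearity check: regress instrumental magnitude (y) against catalog
# magnitude (x). All numbers here are synthetic; a real test would use
# measured instrumental magnitudes of standard stars.
catalog = [10.2, 11.5, 12.1, 12.8, 13.4, 14.0, 14.7]
instrumental = [m - 21.3 for m in catalog]   # hypothetical zero point

n = len(catalog)
mean_x = sum(catalog) / n
mean_y = sum(instrumental) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(catalog, instrumental))
sxx = sum((x - mean_x) ** 2 for x in catalog)
syy = sum((y - mean_y) ** 2 for y in instrumental)

slope = sxy / sxx                          # ideally close to 1.000
correlation = sxy / math.sqrt(sxx * syy)   # ideally close to 1.000
```

With real measurements, a slope near 1.000 and a correlation near 1.000 indicate the system is responding linearly over the magnitude range tested.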
You should also use a spreadsheet or some other program to do a linear regression analysis of the data using the instrumental magnitudes for the Y values and the catalog magnitudes for the X values. The first thing to look for is a high correlation in the data. Perfect would be an absolute value of 1.000. For a given aperture and sky annulus size, you may see the data start to scatter as you measure fainter stars. If your system is a perfect match to the band in which you make the measurements, the slope of the line (assuming magnitude vs. magnitude or log-log plot) should also be exactly 1.000. If your system isn’t a perfect match, then the slope may be slightly different. That is why you check the correlation. It’s possible that using a smaller aperture would help reduce the scatter for faint stars, i.e., the correlation would be closer to 1.000. However, keep in mind that you should use the same size aperture for all measurements. Bill Romanishin’s book covers the technique of using different-sized apertures and scaling the measurements so that they are based on a common size. I haven’t tried that technique and can’t say how difficult it may or may not be. My philosophy is to try to keep things as simple and straightforward as possible. See page 41 for more about selecting the right aperture size for a given image.

8.3.3 Supported Catalogs
If you’re reducing raw instrumental magnitudes to those on a standard system, the program needs to support reading data from a catalog of standard stars. Of course, one could just get that data independently and plug it all into a spreadsheet, but why should you have to do that if there’s a program that is self-contained? The commonly used catalog of standard stars is the one by Landolt. These stars are the de facto standard used to determine the transforms for systems. If you’re careful, the sequences produced by Arne Henden can also be used to get very close to, but still not officially on, the standard system. It’s important when using the Henden fields to use only those stars with magnitude errors <0.02 m and for which there are at least three observations. Does the photometry software support using one or both of these catalogs? Can you enter or import data from other catalogs?

8.3.4 Combining Data From Multiple Sessions
You can’t get all the data you need to solve a lightcurve to a high degree of precision in one run, not unless that run is maybe eight hours and the period is only 30 minutes. Usually it takes observations over several nights so that you have data covering the complete cycle and for which there are no ambiguous period solutions (aliases). To analyze data from several runs, you must be able to combine the data into a single set. There are many ways to do this, especially if you use a spreadsheet, but to do so requires accounting for several factors as the data sets are merged:
1. Different comparison stars when working asteroids.
2. Average asteroid magnitude changing due to geometry.
3. Opposition effect for asteroids.
4. Different offsets when merging data from multiple sessions and/or observers.

The last point is very important when you are working with other observers. Even if everyone converts to standard magnitudes, there are systematic differences on the order of a few hundredths of a magnitude. These should be taken into account during data analysis. It’s a definite plus if the software you use can also account for these offsets, whether working with just your data or that of others.

8.3.5 Reductions to Standard Magnitudes
You can use a spreadsheet to handle the reductions, but it’s very handy to have a program that measures the images for period analysis and for finding the necessary values to put your observations on a standard system. It’s even better if the program then applies those values to your observations to generate data on a standard system.

8.3.6 Period Analysis
The main point of getting data for a lightcurve is to find the period and amplitude of the curve. Again, it’s easy to export the data from a program to a file that can be read and used by a spreadsheet. Using the folded-data approach, one can find the period fairly easily. Do be careful about using the Fourier analysis in some spreadsheets. It lacks some of the sophistication required for handling some of the more tricky asteroid lightcurves, such as being able to select the number of orders or showing the uncertainty parameters associated with a given mean fit of the data. The period analysis in MPO Canopus is based on the Fourier analysis method developed by Alan Harris. While originally written for asteroid lightcurve work, it works just as well with variable stars. A plot of the period spectrum is included to help you determine which of several possible solutions is the most likely. Another very nice program that’s worth exploring is Peranso (Period Analysis Software) by Tony Vanmunster (http://www.peranso.com). This software is being widely used by the AAVSO for variable star period analysis and appears to be a very sophisticated tool. A handy feature, not required, is if the program allows you to play “what-if” with the period and then replot the data without having to read all the data again. This can be used to help confirm to what precision you should state the period.
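To illustrate the folded-data idea in miniature: fold the observations on a trial period, then measure how “jumpy” the folded curve is (a Lafler–Kinman-style string-length statistic); the trial giving the smoothest fold wins. The data below are synthetic, not from any real object, and this sketch is no substitute for a full Fourier analysis with order selection and fit uncertainties:

```python
import math

def string_length(times, mags, trial_period):
    """Fold on a trial period and sum the squared magnitude jumps between
    phase-adjacent points; the smoothest fold gives the smallest value."""
    folded = sorted((t / trial_period % 1.0, m) for t, m in zip(times, mags))
    return sum((folded[i][1] - folded[i - 1][1]) ** 2
               for i in range(1, len(folded)))

# Synthetic data: a 0.3-day sinusoidal "lightcurve" sampled every
# 0.02 day for four days (all values invented for the demonstration).
true_period = 0.3
times = [i * 0.02 for i in range(200)]
mags = [0.2 * math.sin(2.0 * math.pi * t / true_period) for t in times]

trial_periods = [0.1 + k * 0.0005 for k in range(801)]   # 0.10 to 0.50 day
best = min(trial_periods, key=lambda p: string_length(times, mags, p))
```

On this noiseless example the scan recovers the 0.3-day period; real data, with noise and gaps, is where the more sophisticated tools earn their keep.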
8.3.7 Data Exchange Capabilities
Not everyone uses the same software. The software you use should be able to write data to a format that other programs can use and, conversely, be able to read data from other programs. At the very least, the format would be the Julian date (JD) and a magnitude, differential or absolute, in a simple text file with one observation per line.

8.3.8 Plotting Capabilities
Unless you’re using the ubiquitous spreadsheet, the photometry software should include at least some basic plotting, not only for the lightcurve, but also for the linear regression solutions when determining photometric transforms and extinctions. Never trust a computer! Always review the data and see if the results make sense.
8.4 Conforming to Accepted Standards
Asteroid and variable star people have their own standards about plotting and reporting data. See if the photometry software can address these.

8.4.1 Heliocentric JD or Light-Time Correction
Besides putting all data for analysis on the same zero-point, the times of the observations need to be put on a common “clock.” This is critical in period analysis since the corrections, or lack of them, can affect the results dramatically. Heliocentric JD is used for variable star work and is defined as the time the light from a star reaches the Sun, not the Earth. This eliminates the time difference of up to 16.6 minutes caused by observing the star at one point in the Earth’s orbit and then observing the same star six months later when the Earth is 2 AU farther from or closer to the star. HJD is critical when working short-period stars since a difference of a few seconds quickly adds up. It may not be so critical when working long-period variables. When doing period analysis, you must use HJD. On the other hand, when reporting data to the AAVSO, they usually request that you provide the uncorrected JD. This way, they apply the necessary corrections and there is no doubt about how the correction was applied and the size of the correction. Light-time correction is computed for the time that the observed light left the asteroid instead of when it reached Earth. The correction is about 0.005776 d per astronomical unit. This eliminates errors caused by changing distances between the Earth and the asteroid that have the net effect of a time-delay (or advance) when observing a particular point in the lightcurve. If your software does period analysis, it should allow for the appropriate correction to be applied to the raw date. It should also allow saving the uncorrected
date to a file for those cases where the group or person coordinating observations wants to apply the corrections in order to assure consistency and avoid confusion.

8.4.2 0% Phase
A phased plot is one where all the data is placed in the range of 0% to 100% (or 0.0 to 1.0) of the period of the lightcurve. The value is determined by taking the elapsed time from a fixed Julian Date and dividing it by the period. The fractional part of the result is the phase. For example,

Phase = (5210.0 – 5200.0) / 100 = 0.1, or 10%
In this case, the fixed Julian date was 5200.0 (actually 2455200.0), the JD of the data point is 5210.0, and the period is 100 days. When variable star observers plot data for eclipsing variables, they like to put the primary minimum (the deepest minimum) at 0%. Long-period variables are usually plotted with a maximum at 0%. Sometimes you’ll see an asteroid plot where a maximum is placed at 0% phase. The choice of which is the primary maximum is mostly arbitrary but is usually the brightest one. It sometimes helps when analyzing lightcurve periods of asteroids to put an extreme point (minimum or maximum) at 0% phase – and sometimes it makes it harder. Asteroids are troublemakers sometimes. Check whether your software, if it plots data, can accommodate the various preferences for plotting.

8.4.3 Zero-Point Magnitudes
Any plot of lightcurve data should be set up so that brighter magnitudes are at the top. Because the magnitude scale is “upside down,” this means that smaller values are at the top, regardless of whether you are plotting absolute or differential magnitudes.

1. All differential values should be computed as Target – Comparison, where Comparison can be a single star or the average of several stars.

2. For asteroids, the general rule is to have the vertical center of the plot be the average of the maximum (most positive value) minus the minimum (most negative value). For example, if the minimum value is –0.5 (the target was brighter by 0.5 m) and the maximum was 1.2 (the target was fainter by 1.2 m), then MidPoint = (1.2 – (–0.5)) / 2 = 0.85

3. For variable stars, the usual preference is to put 0.0 at the top and make all differential magnitudes a positive value. This means the plotting routine must adjust all values so that the data point where the star is brightest is 0.0 and all others fall below that. Fig. 8.2 shows the difference between the two plots.

Figure 8.2 The two common methods of plotting differential magnitudes. On the left, the magnitudes are centered about a mean differential value, thus there are negative and positive values. This method is often used when reporting asteroid lightcurves. On the right, the magnitudes are all positive since they are derived from the minimum differential value being set to 0. This is the more common use in variable star reports.

8.4.4 Reporting Errors
The photometry software you use should report the errors for the data. This feature is almost mandatory now. In the case of a magnitude, an estimate of the error in the value should be given in magnitudes as well. If this is not included in the documentation, then the vendor should provide you with information about how the errors are calculated. In many cases, a simple 1/SNR is given. This is not a complete picture by any means, but at least it’s a beginning.

8.4.5 Time of Minimum (TOM) and Ephemeris Calculator
This is certainly not a requirement, but variable star workers would definitely find it useful. There are several accepted methods for finding TOM. If the program includes one, it should say which one and also state an error. Handier yet is if the program predicts future (or past) times of minimum based on the derived period.
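The prediction side is simple enough to sketch: a linear ephemeris gives the time of the nth minimum as T_n = T0 + n * P. The epoch and period below are made-up values, and predict_minima is a hypothetical helper, not a function from any particular package:

```python
import math

def predict_minima(t0_jd, period_days, start_jd, end_jd):
    """Times of minimum from a linear ephemeris, T_n = T0 + n * P,
    for those epochs n whose predicted time falls in [start_jd, end_jd]."""
    n = math.ceil((start_jd - t0_jd) / period_days)   # first epoch in window
    out = []
    while t0_jd + n * period_days <= end_jd:
        out.append((n, t0_jd + n * period_days))
        n += 1
    return out

# Example: epoch JD 2453000.0, period 0.5 day, predictions for one night.
minima = predict_minima(2453000.0, 0.5, 2453010.2, 2453011.3)
```

Real eclipsing binaries drift from a strictly linear ephemeris over time, which is exactly why fresh timings of minima are valuable.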
8.5 Manual Versus Automated Measuring
The discussion here is not whether it’s best to automate the process of taking images but whether to automate the process of measuring images. There is a great tendency for those getting into lightcurve work to assume that automation is the way to go. See if you feel the same after you have to go back to figure out why there are seven data points that just don’t seem to make sense and you’re not quite sure which of the 200 images that were measured by the software produced those seven data points. I love automation – where it makes sense. I’m not an advocate of automation when it comes to measuring images. Let’s take a look at the advantages and disadvantages of both methods and then you can decide for yourself.

8.5.1 Full Automation
The prime selling point for full automation is that you can let the computer do all the work while you do other things. If you’re producing a large number of images, then this approach has some merit. Another argument that can be made is that it avoids mistakes. True, the program doesn’t get sleepy, or distracted, or try to answer the phone while measuring images. Both of these are good selling points, but take a look at the other side. An automated system, unless it is very sophisticated with its detection and measuring process, can’t always make the right critical decisions. There are any number of things that could interfere with getting a good measurement from a particular image. Those usually result in “bad” data points. I’ve been down the road of having to go back and determine what caused a large set of seemingly bad data points. By the time that’s done, the benefit of automated measuring is often lost. Just to be clear, this is in the context of lightcurve photometry and does not apply to searching for new variable stars or asteroids. If I had to search several hundred trios of images each day looking for asteroids barely above the noise, I wouldn’t last long. Bless those programmers who have automated that process.

8.5.2 “Hands-On”
“Hands-on” doesn’t mean you literally measure each comparison star and target on every image. Instead, you can get a program to display each image with the measuring apertures around the comparisons and target or at least where it believes those objects are. If you like the positioning of the apertures, click a button or press the space bar, the measurements are recorded, and the next image is loaded automatically. The process repeats until the last image is measured. If the program has the built-in intelligence, it should track the motion of an asteroid. This approach allows monitoring for obvious bad images, e.g., cosmic ray hits, meteors, and Santa Claus’ sleigh. A well-designed program or system can
measure images almost as fast as the automated approach, if not faster. Using MPO Canopus, it’s not uncommon for me to measure 150 images and find an initial period in less than 10 minutes. As software becomes more sophisticated, the disadvantages of full automation may go away. I for one probably would not go that route. I enjoy looking at images, even if it is the same field 150 times. Besides, if I turned everything over to the computer, it might appear that I have some free time and so be assigned some household chores.
Chapter 9
Collecting Photons

Before you fire up the telescope, camera, and computers to take images, you need a plan. You need to decide what to shoot, how you’re going to go about it, and consider how others might use your data. Chapter 2 covered some of the types of targets you might shoot. Now you need to pick one or two specific objects and start getting images.
9.1 The First Step – Getting the Right Time
I’ll say it again: never trust a computer. If you rely on your PC clock for accurate time, you’ll soon be in trouble. Those doing astrometry on fast-moving objects have learned this lesson all too well. PC clocks are at the mercy of many things, most of all the operating system. I don’t work much with Linux or UNIX machines, so they may not interfere with the hardware clock on a computer, but we’re all familiar with how Windows can steal cycles away from the CPU and never give them back. Over a day a computer clock can drift by seconds, if not minutes. In all astronomical work, you should make sure that you have the best possible time being recorded into your data. Whatever you do, do not overlook this important part of your data. Being only a second or two off can have dramatic effects on final results. There’s nothing more humbling than submitting some astrometry positions to the Minor Planet Center and getting a note saying all those observations are bad because of timing errors.

9.1.1 Internet Time
It’s not hard to find a free or shareware program on the Internet that automatically updates your computer clock by querying one of the available timeservers. These programs can even adjust the received time for the small but finite time it took for the request to go to the timeserver and back to your computer. I have one of these running on my computer and it’s worked quite well.

9.1.2 “Atomic Clock”
For some time, there were “atomic clocks” available that received the time signal from WWV in Fort Collins, CO, and could be attached to the computer via an RS-232 port. You could set the clock to update the system clock as often as once a minute. I use this method for one of my other computers and cross-check with another one using Internet time to see if they stay in step. Unfortunately, these clocks no longer seem to be available – at least those with the RS-232 interface – but you may find one on eBay or some other web site.

9.1.3 UT, No Daylight Saving Time
A discussion on a mailing list about issues with Daylight Saving Time prompted a professional to email me a message in which he was nearly astounded to hear about the problems. To him, any computer that is used for acquiring astronomical images should be on UT and not use DST at all. I completely agree. I know you may use your computer for other things, but it’s not that hard to do the mental math to figure out what time it is from the computer clock (or look at a wall clock). If you stick to UT with no DST, there is no worry about time zones or daylight saving time. By the way, any imaging software should write only UT for time values in the header. If it doesn’t, pressure the authors to include that option as the default, if not make it mandatory.
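If you script any of your own logging, generating timestamps directly in UT sidesteps the DST problem entirely. A sketch, assuming the common ISO 8601 style used for the FITS DATE-OBS keyword; fits_date_obs is a hypothetical helper, not part of any particular package:

```python
from datetime import datetime, timezone

def fits_date_obs(dt=None):
    """UT timestamp in the ISO 8601 form commonly used for DATE-OBS,
    trimmed to millisecond precision."""
    # Always ask for UTC explicitly; never rely on the PC's local zone.
    dt = dt or datetime.now(timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3]

# Example with a fixed instant (so the result is reproducible):
stamp = fits_date_obs(datetime(2006, 1, 15, 3, 4, 5, 123456,
                               tzinfo=timezone.utc))
```

The key design point is that the helper never consults local time, so it produces the same value whether or not the operating system is observing DST.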
9.2 Planning the Observing Program
Sometimes planning the observing program is nothing more than knowing you want to work an asteroid for which there is no known lightcurve and observing it until you have a curve and know its period and amplitude. If you’re hoping to help with shape and pole determinations, you’ll need to coordinate your efforts with others so that the data gets into the right hands in the most efficient way. Collaborations are very helpful when the target has a long period or a period with a multiple very near the interval between observing sessions. If you’re working variable stars, you almost certainly are going to be working with others at some point. There are often campaigns in progress to follow a cataclysmic variable that’s in outburst or that needs constant monitoring. The same can be said for an eclipsing binary that is going through an eclipse that occurs at rare intervals. There’s more to planning than just picking your target.

9.2.1 Data Mining
No, you don’t need a pickaxe. The concept is to get as much out of each image as possible. If you’re working an asteroid or variable star for its lightcurve, there are likely many other stars in the field. Have you considered working with any of them? It’s very possible that one or more of the field stars is a variable that has yet to be discovered, studied, and classified. You just hope it’s not one of those you choose for your comparisons.
Figure 9.1 Finding a new variable. This screen shot from MPO Canopus shows the sampled lightcurve found during a search for new variable stars.
Some software packages allow you to check those additional stars to see if there is something else to be learned from your images. I have about 50 CDs and DVDs filled with images taken for asteroid lightcurve work. With the urging of Bob Koff of Antelope Hills Observatory in Bennett, CO, I developed a routine in MPO Canopus that searches images for variable stars. Fig. 9.1 shows a screen shot following a search of asteroid lightcurve images. The plot is for a star that turned out to be a previously undiscovered eclipsing binary star. Always keep in mind that your images may hold much more information than you plan or expect.
9.3 Selecting Targets
There are hundreds of thousands of asteroids and who knows how many variable stars out there. Which of these are good targets for your observing program? The answer is another question: What are your goals? You need to define those to at least some degree and then start the process of selecting a target. One of the things you should do is visit newsgroups or web sites to see what you might do to provide the most immediate help. Some group may need some timely assistance with a gamma-ray burster, a cataclysmic variable, or a near-Earth asteroid. Those in for the long run usually define one or two general goals and then peck away a little bit at a time. This allows for occasional excursions from one area into another and so keeps one’s interest going. I found the diversions to eclipsing binaries very helpful for my asteroid lightcurve work. Asteroids can be very reluctant in giving up their secrets, and that is sometimes frustrating. A couple of nights on a shorter-period eclipsing binary with its regular, symmetrical curve refreshes the spirit.
9.4 General Considerations
Regardless of your target, there are some issues common to any selection process.

9.4.1 Magnitude and Motion of the Object
There’s no point trying to work an 18th-magnitude target with your 0.2-m scope. You may be able to expose long enough for astrometry, but getting down to 0.01–0.02 m precision is going to be very difficult, if even possible. Table 9.1 presents data from Arne Henden (MPAPW 1999 meeting) that serves as a guide.

Telescope (m)    Limiting Mag
0.2              13.5
0.4              15.0
1.0              17.0
2.0              18.5

Table 9.1 Limiting magnitude vs. telescope size, V filter, and 2-min exposure.
The data in Table 9.1 assumes good optics, the use of a V filter, and a 2-minute exposure. The 2-minute exposure is a compromise for main-belt asteroids so that they don’t trail too much. If the asteroid is moving slower than average, e.g., it’s near the stationary point or beyond the main belt, you can use a longer exposure. If the object is zipping across the sky, the exposures must be shorter. Of course, movement is of no concern to those working variable stars.

9.4.2 Dark Time
By this I mean the time that you’ll be able to work the target on a given night. If it’s considerably shorter than the known or estimated period of the target, then you’ll get only a partial curve at best. If you’re trying to fill in gaps from previous runs, then it’s an acceptable situation. However, if you don’t know the period at all, you want the longest possible run. Now we’re in a Catch-22.
Figure 9.2 Asteroid lightcurve numbers. The plot shows the number of asteroids with a given rotation rate in hours. The majority of periods fall in the range of 4 to 10 hours.
A large majority of asteroids have periods less than 12 hours, with a significant portion of those in the range of 4–8 hours. Fig. 9.2 shows a chart of the number of asteroids versus the period in hours. Use that as your first best guess. For variable stars, you are probably working an already known star and so can figure more accurately how long you can follow the star on a given night. Keep in mind that the target should be at least 30° above the horizon at all times. This rule can shorten a useful run considerably. In one case I was working an asteroid at about +7° declination. I’m at +39° latitude, meaning the star never gets more than about 60° above the horizon. The combination of short summer nights and the short time the asteroid was high enough allowed only 4–5 hours a night. Fortunately, the period was about 6 hours and so each run covered a large portion of the curve. In short, use the dark time to determine which targets are better suited to getting useful data. Of course, in some cases the circumstances dictate that you get what you can when you can. If the asteroid is at 14th magnitude for the first time in 25 years and won’t be that bright again until Jean Luc Picard is warping about the galaxy, you should definitely concentrate your efforts on that asteroid in lieu of others. 9.4.3
9.4.3 Avoiding Aliases
I’ll cover aliases in the analysis section. For now, what you’re trying to avoid is getting just enough data from a single run that when combined with a run several days later leads to a number of valid solutions (aliases). Petr Pravec of Ondrejov Observatory, Czech Republic, recommends that if you’re starting a new target you should wait until you’ll be able to get two consecutive nights. Then, if you get data several days removed from the existing sets, you have a better chance of finding the true period. The weather is not often accommodating, so do the best you can to follow this rule and remember that the two consecutive nights do not have to be the first two nights. It helps to get a handle on things early on, but it’s better to collect photons when you can and deal with the consequences later.
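To see why isolated runs are dangerous, consider that two short runs separated by a gap cannot distinguish frequencies that differ by a whole number of cycles per gap. The following is my own illustrative sketch, not taken from any lightcurve package:

```python
def alias_periods(true_period_h, gap_days, n_max=3):
    """Candidate alias periods (hours) that can fit data from two short
    runs separated by gap_days: rotation frequencies differing by a whole
    number of cycles per gap are nearly indistinguishable."""
    gap_h = 24.0 * gap_days
    f = 1.0 / true_period_h  # cycles per hour
    aliases = []
    for n in range(1, n_max + 1):
        for sign in (+1, -1):
            f_alias = f + sign * n / gap_h
            if f_alias > 0:
                aliases.append(round(1.0 / f_alias, 3))
    return sorted(aliases)

# Two fragmentary runs 4 days apart on a 6-hour rotator:
# all of these periods are plausible fits to the combined data
print(alias_periods(6.0, 4.0))
```

Two consecutive nights shrink the gap, pushing the aliases far from the true period, which is exactly the reasoning behind the recommendation above.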
9.5 Asteroids
At the beginning of the book there is a list that gives a number of reasons you should do asteroid lightcurve work (see page 7). Review that list and see if any one of those gives you an incentive to develop a specific observing goal. Once you’ve made your decision, you need to find one or two asteroids that fit the bill and start to work.
9.5.1 The Collaborative Asteroid Lightcurve Link (CALL)
You can find the CALL site at http://www.MinorPlanetObserver.com/astlc/default.htm. It has several services that are helpful for selecting targets.

1. It provides links to files that contain recent lightcurve results. This list lets you know for which asteroids there are known lightcurve periods and amplitudes and the quality of the results. Periodic updates are posted to keep the list current as more lightcurve results become available. The rating system uses values of 1 to 4:

1 The result is very insecure and could be entirely wrong. It is usually based on only a small fragment of the total lightcurve.

2 The result is mostly secure but could still be in error by 30%. The data usually covers the majority of the curve but there can be gaps and/or insufficient overlap at critical points, i.e., the maximum and minimum points.

3 The result is reasonably secure and based on data that covers the complete curve and has sufficient overlap to assure that any alias periods can be eliminated.

4 A pole position has been reported.
There are several caveats to this system. First, a 4 rating means only that a pole position has been reported. It does not necessarily mean a secure period has been found. In one instance, an asteroid had a rating of 4 but the data justified only a 2 for the period; eventually the posted period was proved to be wrong. Another assumption that’s often made is that a 2 rating means the asteroid is not worth observing. Remember that a 2 rating means the period is still questionable. If your goal is to determine lightcurves for which there are no previously reported results, you should tend to avoid those with a 2 rating and concentrate on those not on the list or with a 1 rating.

2. The CALL site includes a “reservation system” that allows you to let others know that you are working a given asteroid or that you need help getting a solution.

3. The site is where many amateurs post their results pending or after publication in a formal journal such as the Minor Planet Bulletin. Just because someone has posted a period doesn’t mean that additional work might not be useful. Of primary importance when posting results is to be honest about the quality rating of your results. Also keep in mind that when you post your results they fall into the public domain.

4. The site has a search utility that allows you to search for an asteroid in the reservations or results data tables by number, name, or both.
5. There are links to other popular web sites featuring lightcurve results or observing programs. You should visit these to help assure that the asteroid you’ve selected hasn’t already been worked.

If your goal is to work only those asteroids with no published data, you may have a hard time confirming that. You can visit the Astrophysical Data System (ADS) web site and do a search for the asteroid. Its URL is http://adswww.harvard.edu/. That may turn up something you didn’t find elsewhere. Still, unless something has been published and put into the ADS database, you’re not going to find it. At some point, you have to concede that you may not be the first but you can certainly be the latest and try to be the best.
9.6 Variable Stars
There are too many possibilities here to just start shooting. I strongly recommend that you get on the newsgroups and web sites of the variable star groups in your area and country. For those in the United States, and for many others around the world, the first place to go is the American Association of Variable Star Observers (AAVSO). The URL is http://www.aavso.org. This group has been around for nearly a century and its large pool of volunteer members has contributed a vast number of observations. There are parallel groups in many other countries, including the United Kingdom, France, Italy, Japan, Australia, and New Zealand, to name a few. I’ve included some URLs in the appendices for these groups.

Within the AAVSO there are several groups dedicated to specific types of variables, such as eclipsing binaries, cataclysmic variables, long-period variables, and others. It’s not hard to find a person within the group or to use the organization’s web site to get in touch with a group’s team leaders. They will help you establish your observing program by telling you which stars need work, the techniques and preferred procedures, and how to report your observations.

There is one other group that I want to mention specifically: the Center for Backyard Astrophysics (CBA). The URL is http://cba.phys.columbia.edu. This group was started by David Skillman in the 1970s and was originally called the Center for Basement Astrophysics. The group has grown over the years and is now under the leadership of Joe Patterson. It includes a number of observers around the world. This means that with a little coordination, the group can follow
a set of targets almost 24 hours a day. This becomes critical when studying stars with periods on the order of 0.3 to 2.0 days. It’s those darned aliases again! Naturally, the group has evolved to using CCD cameras on relatively modest scopes. The typical station has a 0.25 m telescope and a common CCD camera such as an ST-7 or ST-8. While they do study all types of variables, the primary targets are cataclysmic variables, partly because they are active objects and each outburst keeps the email flowing, which is “good for observer morale,” to quote the website. If you can automate your system and can measure a number of images from each night’s run in short order, which is pretty easy with today’s software, then this might be a program for you.
9.7 The Observing Run
I know you’re anxious to get to the telescope, but it really is worth the effort to understand what you’re going to be doing before you do it. This way if things don’t go perfectly, e.g., the equipment malfunctions, you can concentrate on those issues and not worry about shooting the right target at the right time. In fact, if you’re well organized and the equipment is cooperating, you can go read a book, watch TV, or even talk to your family; build up that pool of Spousal Permission Units any way you can!

9.7.1 Getting Flats and Darks
Most professional photometrists recommend that you get new flats each night. Something may have changed since you made your last set, such as some new dust donuts, a shift in the equipment that affects vignetting, and so on. If you have the routine down, these don’t take too long and can be done before dark settles in and it’s time to start observing. Many choose to build libraries of flats. For the most part, this is acceptable, but keep in mind that things can and do change. If you’re looking for the most accuracy, try to get a fresh set of flats – if not every night then frequently.

There seems to be a difference of opinion about darks in this regard. There are many experienced CCD imagers who use cloudy nights to build libraries of darks. This is where they get a number of raw darks at various combinations of binning, temperature, and exposure length so that they can create master darks to be used later on. If you’re using scaled darks, meaning you need to get bias frames as well, you may want to get fresh darks with every run.

9.7.2 Merge Later
Some observers get a single dark and merge it (or a library dark) with the raw images at the time the image is saved. They may also merge in a library flat as the images are saved to disc. Don’t do this. You have less control over what happens
and you’re not really saving any time. Get your flats and darks for the run and merge them (or your carefully made library darks and flats) with the raw images after the run but before measuring the images.

9.7.3 Transform and Extinction Images
If the night is starting off clear and your window for shooting the target is not overly restricted, get the images of the standard fields for the modified Hardie method right off. That way they are done and you’re more confident of the conditions under which they were shot. The same considerations mentioned about when to get darks and flats apply here, too: if the field is already high up, you have a limited time to get images; if it looks like conditions will hold, and there is plenty of dark time after the run, then you can wait until after the run to get the transformation images.

9.7.4 Exposures: How Long and How Often
This has been covered previously in several places in other contexts, so I’ll give a brief overview here. Your goal is to get images where the comparisons and target have a high enough signal-to-noise ratio (SNR) to get the precision you need. In addition, you want to avoid excessive trailing, to get as many images as conditions allow, and to take images at a rate such that you work the target effectively. Many of these conditions are interactive: changing one affects one or more others.

If you’re looking for 0.01-mag precision, you need an SNR of at least 100 on all objects, not just the target. For an average scope of 0.25 m and a 14th-magnitude object, that means exposures of about 2 minutes. In general, you should keep exposures at 1%–2% of the period. However, that’s not an unbreakable rule. A 5-minute exposure on a target with a 2-hour period is about 4%; but if that’s what it takes, that’s what you should do. Eventually, there is a limit; if you’re working an asteroid with a suspected period of 12 minutes and using 2-minute exposures, or about 16% of the period, you’ll have a hard time getting a lightcurve that can be correctly analyzed.

Nothing is gained by having your scope sit idle; waiting a minute or two between exposures lets photons go unused. If the target is on the faint side, go ahead and shoot almost as fast as the system allows an image to be taken and downloaded. If your camera can take the extra workload, the additional images let you “beat down the noise.” Usually, I don’t change the exposure or intervals for a given target after the first night. This is to maintain consistency whenever possible. If you’re making lots of changes, it’s hard to keep things straight. Of course, if circumstances dictate, adjust the exposure time and interval as needed. Rules are meant to be broken. Experience is your best guide.
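Both rules of thumb above reduce to simple arithmetic. A small sketch of my own, using the standard 2.5/ln(10)/SNR approximation for photon-limited magnitude precision:

```python
import math

def mag_precision(snr):
    """Approximate 1-sigma photometric uncertainty (mag) from the SNR."""
    return 2.5 / math.log(10) / snr  # ~1.0857 / SNR

def exposure_fraction(exp_s, period_h):
    """Exposure length as a percentage of the rotation period."""
    return 100.0 * exp_s / (period_h * 3600.0)

print(round(mag_precision(100), 4))           # SNR 100 -> ~0.011 mag
print(round(exposure_fraction(300, 2.0), 1))  # 5-min shot, 2-h period -> ~4%
print(round(exposure_fraction(120, 0.2), 1))  # 2-min shot, 12-min period -> too long
```

An SNR of 100 thus lands right at the 0.01-mag level quoted in the text, and the last line reproduces the 12-minute-period cautionary example.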
9.7.5 Keeping Up With the Asteroids
Variable star observers can skip this section since their targets don’t move. Those who are following asteroids, or are simply curious, should read on. Asteroids are usually moving, and so you should consider how fast the target is moving and the field of view (FOV) of your camera. If you can keep the asteroid and the same comparisons in the field for the entire run, then you can aim the scope at the asteroid, or maybe a position where it will be halfway through the run, and go about your business. If the asteroid is going to cross the field or, if by keeping up with it, you lose the original comparison stars, then you have to decide what to do.

Staying on Target
If you decide to update the scope’s position periodically, e.g., every 30 to 60 minutes, then at some point the comparison stars might get too close to the edge or go off the frame entirely. As long as there are stars to replace them, you can continue on. This will be a time when putting the observations on a standard or at least internal system proves beneficial. Another reason to make frequent calls to a GotoAsteroid-type command is when you’re using a German equatorial mount (GEM). The controlling software or built-in firmware can then tell the scope to perform a “meridian flip.” This is needed when the target moves far enough west that, if the scope continued in its current position, the tube or part of the mount would hit the pier. At that point, the scope is sent to the other side of the pier so that it can continue tracking.

Keeping a Point of View
You may prefer to keep a specific field and, when the asteroid moves too close to the edge, switch to an entirely different field on which you can sit for another period of time. This isn’t as easy to put into a script since you have to know how often you need to change the field. You still have the issue of having subsets of data that use different comparison stars.
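A rough planning aid can tell you how long the target will stay in the field before a reposition or field change is needed. This is my own sketch; the function name and the 80% safety margin are arbitrary choices:

```python
def minutes_to_cross(fov_arcmin, rate_arcsec_per_min, margin=0.8):
    """Minutes before a moving target covers margin * field-of-view.
    Use this to decide how often a Goto-Asteroid-type update is needed."""
    return margin * fov_arcmin * 60.0 / rate_arcsec_per_min

# A slow main-belt rate of 0.5"/min in a 20' field: the field holds all night
print(round(minutes_to_cross(20.0, 0.5)))
# A fast near-Earth object at 10"/min: barely an hour and a half
print(round(minutes_to_cross(20.0, 10.0)))
```

The contrast between the two cases shows why field strategy matters far more for fast movers than for typical main-belt targets.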
9.8 Measuring Images
In simple terms, the goal for measuring images is to have a set of date/magnitude pairs. Using these, you can plot the lightcurve of the asteroid or variable star. If that’s all you want to do (it’s not a bad idea for a first project), then you can stop. However, most people are interested in finding the two most basic elements of the curve, the period and the amplitude. Finding those comes under analysis and will be covered in that section. As I mentioned at the beginning, I’ll be using MPO Canopus and PhotoRed for a majority of the demonstrations in this and the analysis section. I will give some “how to’s” for those using other programs and will do my best to avoid being so specific as to require that you be using MPO software over any other.
9.8.1 Merge then Measure
Assuming you heeded the earlier advice of not adding flats and/or darks as you took the images, you should have a series of raw images, with no flats or darks applied. Along with these you should have the raw flats and darks to be used to create the master flats and darks for final processing (or a set of library darks and flats). I’ll go over the basics of creating master darks and flats but leave the fine details to such books as Astronomical Image Processing by Berry and Burnell, which should be on your shelf right next to the latest version of the Henden and Kaitchuck book on photometry.

One topic I won’t cover is the creation and use of bias frames. Unless you’re using scaled darks, these are not usually necessary. They are useful for analyzing your camera’s performance, however, and worth learning more about. Again, the AIP book is an excellent reference.

9.8.2 Creating Master Darks
To create a master dark, you need a series of raw darks. These are images taken with the shutter closed and exposed for the same time and at the same temperature as your raw target images. They must also use the same binning. With basic processing, you cannot merge a 1×1 dark with a 2×2 raw image. You should have at least 4 raw images, with common recommendations being 9 and 16. The more you have, within reason, the higher the quality of your master dark.

Use your favorite image-processing program to merge the raw images into a single master dark. You can use an averaging or median combine function. The averaging function finds the average value among the images for each pixel. Median combine sorts the values for a pixel in numerical order and uses the middle value, i.e., the one where half the values are less and half are more. The median combine method is good for eliminating pixels that had abnormally low or high values, e.g., where a cosmic ray hit the chip during an exposure. However, it produces a master that is a bit noisier overall than the averaging method. The averaging method has the disadvantage of generating a pixel value that is skewed by one or more of those abnormal pixel values.
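The difference between the two combine methods is easy to see in miniature. The toy example below uses plain Python lists in place of real CCD frames; any image-processing package does the same thing pixel by pixel:

```python
from statistics import median

def combine(frames, method="median"):
    """Combine same-sized raw dark frames pixel by pixel into a master dark.
    frames: list of 2-D lists of pixel values (same binning/exposure/temp)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    f = median if method == "median" else lambda v: sum(v) / len(v)
    return [[f([fr[r][c] for fr in frames]) for c in range(cols)]
            for r in range(rows)]

# Three tiny 1x3 "darks"; the middle pixel of one was hit by a cosmic ray
darks = [[[10, 11, 12]], [[10, 500, 12]], [[10, 9, 12]]]
print(combine(darks, "median"))   # ray rejected: [[10, 11, 12]]
print(combine(darks, "average"))  # ray badly skews the middle pixel's mean
```

The median cleanly rejects the cosmic ray hit, while the average drags the affected pixel far from its true dark level, just as described above.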
9.8.3 Creating Master Flats
Flats are essential for precision photometry. The process for taking images for master flats was covered in detail earlier (see page 37). You must also take a series of dark images to create a master flat dark that is applied to the flats. These darks must meet the same requirements as any darks, i.e., the exposure must be the same as you used for the raw flats (not the target images), at the same temperature, and with the same binning.

The master flat dark can be applied one of two ways: (1) apply the flat dark to each raw flat before creating the master flat, or (2) create the master flat first and then apply the flat dark to create the final flat. The recommendation I’ve heard most is the second approach. Your image-processing software should support either method. Almost always, you’ll use a median combine to create the final master flat, though averaging is not out of the question. There are other functions that combine the benefits of averaging and median combine. Those are beyond the scope of this book.

The final flat can be further processed so that it is normalized. This is where the pixel values in the flat are converted to values ranging from 0 to a little above 1. There are various approaches to this process but in general, a certain pixel value, e.g., 25,000, is considered to be 1. All other pixels have a value that is the ratio of their original value to the base value.
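Normalization itself is a one-line operation. A sketch of my own, using the 25,000 base value from the text:

```python
def normalize_flat(flat, base=25000.0):
    """Scale a master flat so pixel values become ratios to a base level;
    pixels at the base value map to 1.0, brighter ones a little above 1."""
    return [[px / base for px in row] for row in flat]

# A tiny 1x3 master flat with some vignetting on one side
master_flat = [[25000, 24000, 26000]]
print(normalize_flat(master_flat))  # [[1.0, 0.96, 1.04]]
```

Dividing a raw image by such a normalized flat then leaves well-illuminated pixels essentially unchanged while boosting the vignetted ones.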
9.8.4 Process the Raw Target Images
Now you can create the final processed images. Subtract the master dark from each raw image. After those are all done, go back and apply the master flat not to the raw images but to the images that were created as a result of subtracting the dark. Some software packages allow you to do this in one pass and so save a little effort on your part.

9.8.5 Stacked Images
Those doing asteroid astrometry of faint targets often take a series of images and then stack them to raise the signal-to-noise value of the asteroid. This process requires not just adding pixel values but first registering each of the images so that any given star has the same X/Y position on all the intermediate images used to create the final stacked image. In some cases, the stacking can be done to account for the asteroid’s motion. This improves its SNR even more since the asteroid is no longer trailed. The stars are trailed, but, since they are usually brighter, a little trailing does not affect their SNR and subsequent position measurement as much.

Very important when stacking is assigning a date/time of exposure to the final image. If all the exposures are the same and each image was taken as quickly as possible after the previous one, then the effective date/time can be readily found. If the exposures were of different lengths or there were significantly different intervals between exposures, some weighting may be required to get a valid date/time for the final image.

Reducing the noise in the data can be done without stacking by taking more than the usual number of images so that you have more data points and/or by binning contiguous observations when analyzing the data. For example, you might take the first three observations, find the average date/time and magnitude, and create a single observation from them. This can be useful for smoothing noisy data without having to go through the machinations of stacking images. This approach has its limits, e.g., binning data points can’t get enough signal out of a 20th-magnitude object to measure it in the first place.
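The binning of contiguous observations described above can be sketched in a few lines (the group size of three matches the example in the text; the sample data is made up):

```python
def bin_observations(obs, n=3):
    """Average each group of n consecutive (jd, mag) points into one,
    smoothing noisy data without stacking the images themselves."""
    out = []
    for i in range(0, len(obs) - n + 1, n):
        group = obs[i:i + n]
        jd = sum(p[0] for p in group) / n
        mag = sum(p[1] for p in group) / n
        out.append((round(jd, 5), round(mag, 3)))
    return out

raw = [(0.50, 14.21), (0.51, 14.25), (0.52, 14.20),
       (0.53, 14.30), (0.54, 14.28), (0.55, 14.32)]
print(bin_observations(raw))  # two smoothed points from six raw ones
```

Note that any leftover points that do not fill a complete group are dropped; for real use you might instead emit a final, smaller group.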
9.9 From Image to Data
In reality, you need to keep track of more than just the date/time and magnitude pairs for each run on your target. If nothing else, accurate records allow you or someone else to reconstruct your results. You may find at some point in the future that you made a mistake or that you didn’t and someone else did. Good record keeping is critical in science. There’s a more practical reason, too. You need to be able to match the differential magnitudes from each session to a common reference point. When working with asteroids, and so most likely using a different set of comparison stars each night, this becomes mandatory. Even if you’re taking filtered images of a variable star field with known comparisons, you may still have some slight adjustments to make from one session to another.

9.9.1 The Concept of Sessions
Each person has his own way of getting a job done. What follows is my method for matching data from one or more runs, regardless of whether you are the only person generating the data or whether you are working with data from several observers. The method depends on managing what are called “sessions.” A session is a group of observations that use the same set of comparison stars and the same filter and are taken during a single run. By this definition, you can’t put observations of a variable star from Tuesday and Thursday night into the same session. If you take images through different filters, you must create a separate session for each set of measurements using the same filter. A session is not always a complete night. In the case of a fast-moving object and a small field of view, you might have a few dozen sessions in a given night for each filter. However you store the data, be it in a text file or one or more tables in a database, certain information should be stored for each session. It’s not how but what you store that’s important.

9.9.2 Recommended Session Data Fields
1. The Name of the Object. You must know for which object the observations were made. This may seem trivial, but I’ve seen files without this information. It’s difficult to check the validity of the data without knowing which object was being observed.

2. The Comparison Stars Used. This is required so that you or someone else can check your data at a later date. You can record them by GSC number, X/Y position on a given image (be sure to record which image and to keep a copy of that image), or the RA/Dec of the stars. Whatever you use, it must uniquely identify the stars that were used for comparisons.
3. The RA and Declination of the Target. The RA and declination are used to compute the correction to Heliocentric Julian Date for variable star observations and the air mass for all targets. This could be considered optional since you know the name of the object and so can look up its position. Why create extra work when you are reviewing the data at a later time? Store it now.

4. The Distances From the Sun and Earth (Asteroid). These are used to compute the light-time correction for asteroid observations. They could be considered optional since they can be computed easily enough. One reason to store them now is to be able to check whether the light-time correction was properly applied.

5. Which Filter Was Used. Since you’re creating a separate session for each filter, this is a very important field.

6. Session Zero-Point Offset. Even if you’re reducing your observations to a standard color and using the same comparisons every time, there can be a slight shift in the zero-point used to reference the differential magnitudes of the target. You should store the value you used as the reference for the given session so that another person can check whether you applied it correctly when you merged data from different sessions.
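The light-time correction mentioned in item 4 is simple arithmetic. A sketch of my own, using the fact that light crosses 1 AU in about 499.005 seconds:

```python
def light_time_days(earth_distance_au):
    """Light-time correction for an asteroid, in days: subtract this from
    the observed JD so the times refer to when light left the object.
    Light crosses 1 AU in about 499.005 seconds."""
    return earth_distance_au * 499.005 / 86400.0

# An asteroid 1.8 AU from Earth: roughly 15 light-minutes
print(round(light_time_days(1.8) * 24 * 60, 1))  # correction in minutes
```

A shift of many minutes is clearly significant when periods are measured to seconds, which is why storing the distances with the session is worthwhile.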
9.9.3 The Observations
You can record any number of fields. I recommend that you store at least the following data for each observation.

1. The original date and time of the observation (UT), not the date/time converted to Julian Date, with or without light-time correction. You can get this directly from FITS or SBIG images, where the data is stored in a header before the actual picture data. It’s important to note somewhere whether these times are the start, mid, or end of exposure. If not the mid-time, which is what you use in analysis, then you must also record the exposure time so that the mid-time can be computed. If the exposures are all the same, you can keep the exposure time in the basic session data and not record it for every observation.

2. The instrumental magnitude for the target.

3. The instrumental magnitude for each comparison star.

4. The air mass for each observation. This is used when computing and applying the extinction and transform corrections.
Additional data can be stored, but these are the most critical. Two additional fields stored by MPO Canopus are the averaged magnitude for the comparisons and the differential magnitude, i.e., Target–ComparisonAverage. This allows direct review of the data for any obvious errors that may have been caused during the measuring process.

9.9.4 Miscellaneous
There are several other items worth saving. It does no harm to keep the information, and it can do a great deal of good should you need to reconstruct your results later.

1. The approximate date and time of the session.
2. The camera that was used.
3. The camera temperature.
4. The instrument (telescope) that was used.
5. The exposure time (if not stored with the observations).
6. If you referenced the comparisons by their positions on an image, the complete file name, including path, where the image was or can be found.
7. Any additional notes and comments about the conditions, equipment, image processing, etc., that might help someone using your data at a later time to give the data the proper weight when merging it with other data sets.
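One way to see how the session and observation fields fit together is as simple record types. This is only an illustrative sketch, not the layout of any particular program’s database; all field names and sample values are my own:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    target: str                   # name of the object
    comp_stars: list              # unique IDs (e.g., GSC numbers) of comparisons
    ra: float                     # target RA, hours
    dec: float                    # target declination, degrees
    filter_used: str              # one session per filter
    zero_point_offset: float = 0.0
    sun_dist_au: float = None     # asteroid only
    earth_dist_au: float = None   # asteroid only
    notes: str = ""
    observations: list = field(default_factory=list)

@dataclass
class Observation:
    jd_mid: float                 # mid-exposure time, UT-based, no light-time correction
    target_mag: float             # instrumental magnitude of the target
    comp_mags: list               # instrumental magnitude of each comparison
    airmass: float

# Sample values only; the target name and magnitudes are arbitrary
s = Session("987 Wallia", ["GSC 1234-0567"], 21.5, -7.2, "V")
s.observations.append(Observation(2453614.71, -11.42, [-12.03], 1.35))
print(len(s.observations))  # 1
```

Whether the records live in a text file, a spreadsheet, or database tables matters less than the fact that every field above survives with each session.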
9.10 The Hands-On Approach for Measuring Images
The hands-on approach, as I mentioned before, requires that you be present and take part in the process. However, depending on the software, it doesn’t mean that you’ll have a lot of work to do. For this section, I’ll be using screen shots from MPO Canopus to illustrate some points that apply to measuring images regardless of the actual mechanics. For the automated approach, I’ll give a brief description using Software Bisque’s CCDSoft®. If you use different software, use what’s here as a general guide to the process.

9.10.1 The Session Data
In MPO Canopus, there is a special form used to create a session (see Fig. 9.3). I won’t cover the details of the form here; however, you can see what information it stores and compare that to the list of recommended fields above.
Figure 9.3 The Canopus sessions and Lightcurve wizard setup forms. The latter is used to set the positions of the comparison stars and target prior to measuring the images. Two images are measured, which allows the program to interpolate a moving target’s motion and correctly position the measuring aperture as each image is loaded.
There are a couple of things to note. One is that the user is able to use subsets of the originally selected comparison stars. This allows for the possibility that one or more of those stars is variable. Using a variable for a comparison generates all sorts of interesting, confusing, and invalid results. The other is the “Comparison Plots” tab. I’ll go into its use later, but it relates to determining whether one of the comparisons is variable.

9.10.2 The Lightcurve Wizard
Once the session data is entered, Canopus makes use of what’s called the Lightcurve wizard (see Fig. 9.4). This serves two purposes. The first is to select an “anchor” comparison star and tell the program the X/Y position offsets of the other comparisons and target relative to that anchor. The second is to determine the motion of the target if it is moving. This is accomplished by measuring the first and last image in the series that is to be measured. The program uses the date/time in the image headers to compute the total elapsed time along with the X/Y offsets of the target in the first and last image to compute the total motion and then rate of motion. As each image is called up, the program interpolates the asteroid’s position on that image and attempts to find the asteroid near that location.
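The wizard’s interpolation step amounts to a linear prediction of the target’s X/Y position from the first and last measured images. A sketch of the idea (times and positions are made-up values, and real software would also handle tracking drift):

```python
def predicted_position(t_first, xy_first, t_last, xy_last, t):
    """Linear interpolation of a moving target's chip position, from the
    measured X/Y on the first and last images of the series."""
    frac = (t - t_first) / (t_last - t_first)
    return tuple(a + frac * (b - a) for a, b in zip(xy_first, xy_last))

# Target at (100, 200) at JD fraction 0.600 and (140, 180) at 0.700;
# predict where it should be on an image taken at 0.650
print(predicted_position(0.600, (100.0, 200.0), 0.700, (140.0, 180.0), 0.650))
```

Having a predicted position for every image is what lets the program drop the measuring aperture near the asteroid automatically as each frame loads.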
Figure 9.4 Measuring images in Canopus. The measuring apertures are automatically positioned by the program. If the user approves of the positioning, he clicks the “Accept” button to record the data and load the next image. A negative image can be displayed. This makes seeing faint field stars easier and so avoids measuring the asteroid when it is too close to a star. Much time has been lost trying to analyze a strange lightcurve, only to find it was due in part to a faint field star.
9.10.3
Measuring the Images
After the wizard is completed and the user selects the set of images to be measured, a list is displayed that allows the process to begin. The user double-clicks on the first image in the list. This causes that image to be loaded, and the program tries to measure the comparisons and target. Since this is the first image, the apertures should be nicely centered and one can accept the program’s positions.

If the field is particularly crowded, such that you are getting all or part of a star or stars in the sky annulus, then it is a good idea to try a slightly larger or smaller annulus. The obvious choice might seem to be smaller, so as to exclude as many of the field stars as possible. Whether or not that is true depends on the algorithm used by the software to determine the sky background. In simplest terms, if you use a smaller annulus, the pixels contributed by the field star become a larger portion of the total number and flux of the pixels within the annulus. This may artificially raise the background level, which makes the star measure fainter than it really is. Check the software documentation to see what it recommends. In very crowded fields, it may use an entirely different technique from aperture photometry.

Fig. 9.5 shows why “hands-on” measuring is not a bad thing. The tracking was less than ideal for the image and the entire trail would not fit in the aperture.
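The effect of a field star in the annulus can be seen with a toy example. This sketch of my own uses a median sky estimate, one common choice; real packages differ in the details:

```python
from statistics import median

def simple_aperture_sum(values_in_aperture, values_in_annulus):
    """Sky-subtracted flux: the median of the annulus pixels estimates the
    background per pixel; a field star in the annulus raises that estimate
    and makes the target measure fainter than it really is."""
    sky = median(values_in_annulus)
    return sum(v - sky for v in values_in_aperture)

aperture = [120, 130, 125, 118]          # target pixels
clean_annulus = [10, 11, 9, 10, 10, 12]
contaminated = [10, 11, 9, 80, 95, 12]   # a faint field star intrudes
print(simple_aperture_sum(aperture, clean_annulus))
print(simple_aperture_sum(aperture, contaminated))  # lower: inflated sky
```

The median resists the contamination better than a mean would, but with enough bright pixels in the annulus even the median shifts, as the second result shows.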
Figure 9.5 The case for human intervention. This badly trailed image was rejected. Some automation systems can handle this problem, but cosmic ray hits and passing planes may prove more difficult.
This brings up a very important point: do not change aperture or annulus sizes after you start measuring. The reason is that you no longer have the same number of pixels used to compute the sky background, which could cause a shift up or down in the measured magnitudes. Another result of changing sizes is that the ratio of the number of pixels in the measuring aperture and sky annulus may change. This can also affect the measured magnitudes. There are techniques that allow changing sizes, but unless your software and/or you are prepared to go through that process, keep the same sizes. In this case, the image was rejected and the user moved on. An automated system may not be able to account for this problem and so would measure the image anyway. What’s the harm? None, really, except that you may have a strange data point in the plot that you must decide is real or not.

As the user clicks the Accept button, the date/time, air mass, and the magnitudes for the comparisons and target are automatically stored in a database table. Should the field have drifted between images so that the stars are no longer under the measuring apertures when an image is loaded, the user can click on the “anchor star.” This automatically repositions the measuring apertures, and the user can then click the Accept button to record the measurements.

9.10.4 Plotting the Data
Once all the images are measured, a different part of MPO Canopus is used to plot the differential magnitudes that were just measured (see Fig. 9.6). Since this is an asteroid, the data is corrected for light-time and the plot is centered on the mean magnitude. For all plots in general, the target should get brighter as the data points approach the top of the plot.
Figure 9.6 A raw lightcurve. The data is from a single session on an asteroid. The X-axis shows the fractional part of the Julian Date, while the Y-axis is the differential magnitude, computed from Target–Comparison. “Comparison” can be the value from one star or the average of several stars.
Figure 9.7 Plots from MPO Canopus showing a good comparison star (left) and a bad comparison (right). The plots show the instrumental magnitude versus time.
9.11
Checking the Comparison Stars
You should always plot the raw data of the comparison stars to make sure that none of them is a variable. Fig. 9.7 shows raw plots from MPO Canopus of two comparison stars obtained during different sessions. The plot on the left is what you would expect when following the target field from low in the east toward the meridian. As the star rises, the first-order extinction is less and so the star gets brighter. The plot on the right is anything but good for use in differential photometry! Needless to say, this star was rejected. However, it was later studied on its own and found to be an eclipsing binary with a period of 11.8 h. This is an example of why you should never use only one comparison star.
Figure 9.8 A plot from MPO Canopus showing the differential magnitude between a given comparison and the average of the other comparisons. This provides a more certain check that the comparison is not variable.
The final check is to plot the difference between a given comparison and the average magnitude of the remaining comparisons. In short, do differential photometry on each of the comparison stars, making it the target and using the others as the comparisons. Fig. 9.8 is such a plot. The data has no definite trend up or down and shows relatively little scatter. This is a good comparison star. In general, you should never use a comparison that is variable. In a small number of cases, I've measured images where two stars were variable but still used them, and only because they both showed a steady trend in opposite directions and by small, equal amounts. I don't recommend you try this, at least not until you have a lot of experience.
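The cross-check just described is easy to automate. The sketch below differences each comparison against the median of the others (a robust stand-in for the average used above, so one variable does not contaminate the reference); the magnitudes and the 0.05 mag tolerance are hypothetical.

```python
import numpy as np

# Hypothetical instrumental magnitudes: rows = images, columns = comparisons.
comps = np.array([
    [12.01, 13.52, 12.80, 13.10],
    [12.02, 13.50, 12.81, 13.11],
    [12.00, 13.51, 12.79, 13.09],
    [12.01, 13.93, 12.80, 13.10],   # comparison 2 jumps ~0.4 mag -- suspect
])

def check_comparisons(mags, tol=0.05):
    """Treat each comparison in turn as the 'target' against the median of
    the remaining comparisons. A flat differential curve means the star is
    steady; a large peak-to-peak range flags a possible variable.
    """
    flagged = []
    for i in range(mags.shape[1]):
        others = np.median(np.delete(mags, i, axis=1), axis=1)
        diff = mags[:, i] - others                 # differential curve of comp i
        flagged.append(bool(np.ptp(diff) > tol))   # peak-to-peak scatter test
    return flagged

flags = check_comparisons(comps)   # only the jumping star should be flagged
```

In practice you would still plot each differential curve, since a slow trend can pass a simple peak-to-peak test.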
9.12
The Automated Approach to Measuring Images
Even if you’re using a program that fully automates the measuring process, you should glance over the previous section so that you have an idea of some of the concerns that arise when measuring images. This allows you to have a better understanding of the process and, if the software allows, adjust the measuring parameters to get the best results. I can’t cover all the programs that are available. With the kind permission of Software Bisque, here is a brief outline of how to proceed using CCDSoft®. I used version 5 for this along with version 5, Level IV, of TheSky®. The latter is required to take advantage of the automated lightcurve utilities in CCDSoft. The first step is to “pre-analyze” the images. This allows the software to insert data into the FITS header so that the image can be matched against the star catalogs in TheSky, as well as other information that allows for accurate astrometry. The screen shot in Fig. 9.9 shows the setup screen after a folder of images has been selected.
Figure 9.9 The Preanalysis setup screen in CCDSoft. ©Software Bisque and Santa Barbara Imaging Group.
Once the images have been analyzed, the next step is to tell the program which stars are to be used for the comparison and check star. At the time of this writing, it was not possible to use additional stars, and, in fact, the differential photometry is performed using only the comparison star. In Fig. 9.10, the asteroid, the comparison, and check stars have been set. It’s a matter now of letting the program automatically open each image, locate the three items, measure the brightness, and compute the differential magnitude.
Figure 9.10 Setting the comparisons and target in CCDSoft just before measuring the images. The image has been reversed so that the star and target markers are easier to see. ©Software Bisque and Santa Barbara Imaging Group.
Figure 9.11 The final lightcurve in CCDSoft. ©Software Bisque and Santa Barbara Imaging Group.
Fig. 9.11 shows the final lightcurve obtained in a matter of a minute or two. This is the same target that was used for the "hands-on" approach, so you can directly compare the curves for the first night. The thin line below the asteroid lightcurve is a plot of the check (K) star minus the comparison star. The small variations and the fact that the line is horizontal are good indications that the comparison and check stars were not variable. CCDSoft also creates a text file of the data that was measured. As with the text file from any program that outputs at least the date/magnitude pairs, the data can be imported into another program or a spreadsheet for period analysis.
Chapter 10
Analyzing the Data

The plots of the data that you acquire are proof that you can measure the brightness of an asteroid or variable star with a reasonable degree of precision. They're also what you need to take the next big step, which is to analyze the data so that you can determine the period and amplitude of a lightcurve and, even more fascinating, create a likely model of a binary star system.
10.1
The Quality of Data
Before you begin working with the data, you should review it for any obvious problems standing in the way of your goal of finding the period and amplitude. I usually plot the raw data of the latest session before trying to analyze the data for a period or amplitude or merge it with other sessions. Fig. 10.1 shows why this is a good idea. It’s extremely unlikely that the one point at upper right is real. It could be the result of a cosmic ray hit, a passing plane, or any number of causes for an excessively hot pixel within the measuring aperture of the target. My method is to remove the data point from the plot and calculations but not from the original data set. In this case, there’s little doubt the point is not valid. In other cases, where the deviation from the rest of the data is not so extreme, the cause might be legitimate. For example, take a look at a couple of lightcurves I obtained for 2000 DP107.
Figure 10.1 One bad data point can spoil a good curve.
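The screening of straggler points described above can be sketched as a robust sigma clip. This is only one reasonable recipe, not the method of any particular package; the data, the MAD-based sigma, and the threshold of 4 are all hypothetical choices.

```python
import numpy as np

def flag_outliers(mags, k=4.0):
    """Flag points more than k robust sigmas from the session median.

    Flagged points are excluded from plots and fits but, as recommended
    above, left in the original data set.
    """
    med = np.median(mags)
    mad = np.median(np.abs(mags - med))
    sigma = 1.4826 * mad     # MAD-to-sigma factor for Gaussian noise
    return np.abs(mags - med) > k * sigma

# Hypothetical session: ~0.02 mag scatter plus one half-magnitude straggler.
mags = np.array([0.00, 0.02, -0.01, 0.01, 0.00, -0.02, 0.51, 0.01])
bad = flag_outliers(mags)
```

The point of the median/MAD pair is that the straggler itself barely moves the statistics used to judge it, which a mean/standard-deviation clip cannot guarantee.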
134 Analyzing the Data
Figure 10.2 Two lightcurves for the asteroid 2000 DP107. The data is a bit noisy, but still very usable and would prove very important.
Figure 10.3 A not so easy match. Is the data at the left “bad”?
The two plots in Fig. 10.2 show somewhat noisy data. There do seem to be some similar features in the two curves, and the Fourier analysis in MPO Canopus was able to find a period of about 2.78 h. Look at one more plot by itself. If you try using the old-fashioned method of comparing features in Fig. 10.3 to find a period, you'll have a bit of trouble. There doesn't seem to be an easy match available with this curve. If the data at the very left of the plot were higher, the plot might fit better with the right-hand plot in Fig. 10.2. Is the data at the left side of Fig. 10.3 bad? One more plot provides the answer. In Fig. 10.4 you can see the seemingly bad data when merged with all the other available data. Save for the points that seem too low at around phase 0.8, the slightly noisy data has revealed a relatively nice curve and allowed the rotation period of the primary to be found. That "bad data" was caused by the satellite of the primary asteroid. These observations, when combined with those from other observatories and some magic performed by Petr Pravec of Ondrejov Observatory, helped prove that 2000 DP107 is indeed a binary asteroid. Would you have "written off" that data and removed it so that you had a clean lightcurve?
Figure 10.4 The lightcurve using all the data for 2000 DP107.
I've been asked how one can tell if data is bad. From the above, I hope you can understand when I reply, "There is no definite answer." In the first case, I don't think anyone would argue that the one straggler data point in Fig. 10.1 belonged. It would be hard to imagine a physical trait of the asteroid that would allow a half-magnitude jump in a matter of a couple of minutes – unless the big spotlight from the used flying saucer lot happened to be aimed right at you when you took the image. Another case in point is that sometimes you'll get a lightcurve that is no curve at all; it's mostly flat and has no distinct period or pattern. Is that bad data? Consider what might cause nearly flat data. First, the asteroid's rotation axis may be pointed nearly right at Earth, i.e., we're viewing the asteroid nearly pole-on, and so there are almost no changes in the features or lighting as it rotates. Second, the asteroid may have a period of several weeks or months, and all you've caught is the 0.02 m rise that occurs during a single day. If you continued to follow the asteroid for some time, you'd eventually get a plot of a nice lightcurve with an amplitude of maybe a magnitude or more (very slow rotators tend to have large amplitudes). If you're a glutton for punishment and want to prove this idea, try observing 288 Glauke. Its "day" is nearly 60 Earth days! The only way data is bad is if you can prove that it really is bad. That means showing that you caused the problem by measuring images when there was a faint star in the measuring aperture, or that you used a comparison star that was variable. It means that your camera was not operating properly, that you had bad darks and/or flats, or that you had an unexpected or unknown light leak, e.g., the full moon shining straight down the scope and causing all sorts of reflections.
In short, there are many reasons for bad data, but if you can reasonably eliminate those reasons, then the “bad data” must be considered valid. Throwing out data
points to meet expectations is very poor science. In college, we called it "curving a fit" instead of "fitting a curve." A parallel question to the one about bad data is, when is noisy data too noisy? In other words, at what point is my data bad? There is probably some basis in mathematics and statistics that would give a quantitative answer. In more practical terms, if the amplitude of the curve is 0.5 m and your data has a scatter of about 0.05 m, all other issues aside, you won't have any problem getting a good curve and, most likely, a period. On the other hand, if your data has a scatter of 0.05 m and the amplitude of the curve is on the order of 0.05–0.10 m, you may have a very hard time, but you may still be able to find a curve, as in Fig. 10.5. There, the noise in the data is only 0.01–0.02 m, but that's about one-third the total amplitude. Was this data too noisy? As for the causes of noise, the sections on photometry dealing with getting a good signal-to-noise ratio, along with good flats and darks, cover the most important things you can do to eliminate noise. The next step comes during the measuring phase, when you select the size of the measuring aperture and sky annulus (see page 41). Once you've done all you can, then it's up to sky conditions and the target. It could be that the target is noisy in and of itself and that nothing you can do will make the data any better. In this case, do your best and try to make lemonade out of lemons. Maybe by doubling the number of smaller lemons, you can get just as much lemonade, i.e., overwhelm the noise with more data. As to when to give up, that time comes when you can prove to yourself there is no solution, or – like the popular cartoon character – you declare, "Enough is enough and enough is too much" and move on to another target with hopes of better luck.
Figure 10.5 A lightcurve for 2460 Mitlincoln. The period is about 2.72 h.
Chapter 11
Period Analysis

The primary reason for obtaining a lightcurve is to find its period and amplitude. The first tells you the rotation rate of the asteroid or the period of the orbit for a binary star system. For an asteroid, the amplitude gives an approximate ratio of the largest and smallest face presented during a rotation. For example, if the curve has an amplitude of 0.4 m, this implies that the ratio is about 1.44, i.e.,

Ratio = 10^(Amplitude/2.5)    (11.1)
Note that this sets a minimum, and there are dependencies on taxonomic class and phase angle. If the curve isn't symmetrical, i.e., one maximum is higher than the other, then you can use the magnitude difference between the two maximums to determine an approximate ratio of the surface areas of the opposing sides. Now you're starting to see how lightcurves can determine the shape of an asteroid. There's more – lots more – to the process, but this is a start. For a variable star, the depths of the primary and secondary minima help determine the inclination of the orbit and the temperature difference between the two stars, among other things. For the initial coverage of this topic, I'm going to assume you're using MPO Canopus, or a program that has similar features. Afterward, I'll give an example of how to do period analysis using a spreadsheet.
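Eq. 11.1 is quick to check numerically. A one-line sketch (the function name is my own):

```python
def axial_ratio(amplitude_mag):
    """Minimum ratio of the largest to smallest face presented during a
    rotation, from the lightcurve amplitude (Eq. 11.1).

    This is a lower limit; taxonomic class and phase angle add further
    dependencies, as noted in the text.
    """
    return 10 ** (amplitude_mag / 2.5)

ratio = axial_ratio(0.4)   # the 0.4 mag example above, about 1.44
```

The exponent `amplitude / 2.5` is just the standard magnitude-to-flux conversion: a 0.4 mag swing corresponds to a flux ratio of 10^0.16.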
11.1
About Merging Data and Setting Zero-Points
Period analysis, or at least one approach to it, requires that you get all your data referenced to a common point. In differential photometry, this is the equivalent of using the same set of comparison stars for all sessions. This doesn't necessarily mean you really do use the same set of comparisons. In variable star work, it might be true. For moving targets, it rarely is. Instead, I mean that each set's differential values are adjusted to compensate for statistical or systematic errors, changing geometry (moving targets), and different average comparison star magnitudes. What remains is a set of differential values from all sessions that is based on a single, arbitrary, fixed zero-point. If you reduce your observations to absolute values on a standard system, then the bulk of the adjustments are incorporated into the final absolute values. There may remain small systematic errors, and those to account for changing geometry, but those are much easier to see, compute, and negate.
138 Period Analysis
The problem of data matching is not as severe when working a lightcurve where all or a large part of it can be obtained in each session. In this case, you may have several reference points, e.g., the minima and maxima, that can be used to bring each session into line. Before computers, analysts matched sessions by plotting the data for each session on a separate piece of graph paper, placing the several pages over one another on a light table, and sliding each of them up or down until it matched the others. They were adjusting the zero-points. With longer-period lightcurves, where you can't get more than a single extreme point in a given session, if that, several complications arise. The most important is that even if you do catch one of the extreme points, such as a maximum, on two sessions, you can't be absolutely certain it's the same point on the curve. Assuming a bimodal curve with two minimums and maximums per rotation, you may be seeing the opposing maximums. The solution seems trivial to state: get more data. If the period, or an integral or half-integral multiple of it, is not extremely close to the time span between sessions, then with enough sessions you eventually get data over a significant part of the curve. If necessary, get help from someone well removed in longitude to shoot just before or after your run so that, in effect, you get a longer run. The assumption so far has been that the curve is nearly symmetric, which makes it difficult to tell which minimum/maximum pair you're seeing. Should the curve have a strong feature and that feature is caught in each session, you have a latch point that you can use to narrow the range of solutions. The slope of the curve is often a crucial clue when making initial estimates of the period. In a typical bimodal curve for an asteroid, the slope of the ascending and descending sections should be such that the time to go from one extreme to another, e.g., a maximum to a minimum, is about 25% of the total period.
Slopes that differ dramatically from this rule of thumb, and the period that generated them, should be viewed with suspicion but not necessarily rejected out of hand. As more data is accumulated and one of the longest runs is used as a guide, a rough estimate of the period evolves. With each additional session, confidence in a proposed solution builds if the slope of the data is in the expected part of the phased curve. If a session produces data with a slope that contradicts the proposed solution, e.g., a session’s slope is descending when it should be ascending, then the solution needs to be re-evaluated. At some point, you can predict the location the data from each session should have in a phased plot and adjust the zero-point, if necessary, so that the data fits in with the other sessions. If a reasonable fit cannot be made using the rule of thumb regarding the slope, the solution should again be viewed with suspicion. Also, you should be wary of solutions that appear to be forced. This could be indicated when several sessions start or end just before or after a prominent feature in the lightcurve, such as a maximum. If too many of the sessions just miss an identifiable feature, you should be asking if you are that unlucky or if the solution is wrong.
Figure 11.1 A raw plot of 771 Libera. The goal is to find the period and amplitude of the asteroid using this lightcurve and one from a second night.
11.2
A Simple Start
Let's analyze a simple curve, one that nearly resembles a regular sine wave. It's by pure luck that you get something so simple. Be thankful when you do. Fig. 11.1 shows the plot of raw data from the first run of 771 Libera. It almost looks like an eclipsing binary. At the very least, it's very regular in shape and size. What makes this curve even more useful is that it appears that more than one full revolution was caught in the one night. This makes period analysis much easier since it means eliminating any aliases in the period. Don't worry: you're not far away from learning about those pesky aliases.
11.2.1
What is Normal?
I just said that it appears that a little more than a full revolution was covered. How do I know that? An educated guess backed by no absolute evidence. Using the pool of lightcurves available so far and other evidence such as spacecraft photos, the general assumption is that most asteroids have a shape similar to an American football. In more technical terms, the object is a triaxial ellipsoid, meaning the cross sections through the X-, Y-, and Z-axes would be elliptical to one degree or another. A sphere is a special case of a triaxial ellipsoid where all three cross sections are circular. Imagine this spinning football (or potato, as some like to picture it) with the Z-axis being the polar axis, which is at right angles to the line of sight. Twice during a single rotation you're looking at the broadside of the object and twice you're looking at the ends. You can see at this point that the lightcurve would have two maximums and minimums. This bimodal curve is "normal" for an asteroid.
Now imagine that the Z-axis is pointing straight towards you and that you, the Sun, and the asteroid are on a straight line, with you in the middle. In this case, you're looking at one of the poles. It's unlikely that you're going to see much of a change in the lightcurve during a full cycle. Looking at such a nearly flat lightcurve, you might guess the asteroid wasn't rotating. That's very unlikely. If a curve obtained months or years later does show some variation, then the two curves combined can be the beginning of finding the shape and the orientation of the spin axis of the asteroid. Not all asteroids have bimodal curves. As you'll see, some can have three and even four minimums and maximums per rotation. A nearly spherical asteroid might have a single maximum, e.g., when a high-albedo formation comes around each rotation. In short, you can start with the assumption that the curve is bimodal but be open to any number of possibilities.
11.2.2
Adding Data From Other Sessions
Using only the data from that first run, the Fourier analysis routine developed by Alan Harris that is included in MPO Canopus found a period of 5.81±0.03 hours. Remember to always give an error estimate with your results. Had the data covered more of the curve, say, to include at least one more minimum or maximum, the precision might have been a bit higher. However, a single night is rarely enough to state the period with a great deal of precision. "More data!" is the constant battle cry of the asteroid lightcurve photometrist. Fig. 11.2 shows the plot of the raw data from two sessions for the asteroid. The second session was four nights later than the first. The point of showing this plot is to indicate how one can get a rough initial estimate of the alignment of the data. From this, you can adjust the zero-point of the second session so that the high and low points of its data roughly correspond to those of the first. Once the zero-points are roughly set, you can plot all the data in what's called a phased plot. This is where one assumes a period and a starting date/time. The elapsed time between a given observation and the starting date/time is divided by the assumed period. The fractional part of the result is the phase and is always greater than or equal to 0 and less than 1.0. The differential magnitudes are plotted against this fraction instead of the absolute date. Fig. 11.3 shows a phased plot of the data from the two sessions with an assumed period of 5.8 hours. As you can see, the zero-point offsets are very close, since the curves are closely aligned vertically. However, the agreement along the horizontal axis is not very good. That's the result of not having the right period, though it is close. Through the process of trying different starting periods and smaller intervals, a period of 5.889±0.001 h was found. A final phased plot is shown in Fig. 11.4.
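The phase computation just described can be sketched directly. The Julian Dates and epoch below are hypothetical; only the 5.889 h period comes from the example in the text.

```python
import numpy as np

def phase_fold(jd, period_hours, epoch_jd):
    """Fold Julian Dates into rotational phase in [0, 1).

    The elapsed time since the starting epoch is divided by the assumed
    period; the fractional part of the result is the phase.
    """
    return ((jd - epoch_jd) / (period_hours / 24.0)) % 1.0

# Hypothetical JDs spanning two sessions four nights apart, folded on 5.889 h.
jd = np.array([2453000.60, 2453000.72, 2453004.65])
phases = phase_fold(jd, 5.889, epoch_jd=2453000.60)
```

Plotting the differential magnitudes against `phases` instead of `jd` produces exactly the phased plot of Fig. 11.3; trying a slightly different period shifts each session horizontally by a different amount, which is what reveals a wrong period.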
Throughout the process, you must be open to the possibility that the initial guess is wrong, that the lightcurve is not bimodal curve, etc. By using some common sense based on experience, the “true” solution usually presents itself.
Figure 11.2 The raw plot of two sessions. Plotting the data this way allows you to get an approximate alignment of the data before merging it into a phased plot. This technique generally works but is purely arbitrary. If the magnitudes have been put on a standard system, then all the sessions should be nearly aligned.
Figure 11.3 A phased plot of two sessions. The data from two sessions has been put into a phased plot, which is where the X-axis is in fractions of the period. The vertical alignment is not perfect, meaning the zero-point between the two sessions is not right. The horizontal displacement is caused by an incorrect period.
Figure 11.4 The well-tempered phased plot. The zero-points for the two sessions have been properly set and the correct period found.
11.3
To What Precision
There are many methods for finding an accurate estimate of the error in the period. One is to change the period against which the data is phased by a small amount and see how well the data still fits. The two plots in Fig. 11.5 show the data phased against the original period minus 0.02 h (left) and 0.01 h (right). You can plainly see that the data no longer fits very well at all – even with a shift of only 0.01 h. It's not until the shift is reduced to 0.005 h that it gets difficult to see the difference between the original plot for 5.889 h and the revised plot. Going by this method, a precision of 0.005 h, maybe down to 0.002 h, would be justified. Another method for determining precision is based on some analytical rules. Just from general experience, Alan Harris has found that lightcurves can be "overlapped" to plus or minus a few percent of the period, depending strongly on the amplitude and on the quality of the data. To get a 1% solution, you need overlapping data spaced by several cycles. To get 0.1%, you need a data span of several tens of cycles, and so forth. In most cases, the worry is more about the risk of missing the right number of cycles, so cycle ambiguity (aliasing) is the biggest concern, rather than the formal uncertainty of a particular solution. Table 11.1 gives some specific examples.
Figure 11.5 The plot on the left shows the data phased to a period 0.02 h shorter than the actual period. The plot on the right is against a period only 0.01 h less. This shows how a phased plot can reveal even small errors in the assumed period.

Period    3 rotations     Precision     30 rotations        Precision
(hours)   (hours, 1%)     (hours, 1%)   (hours/days, 0.1%)  (hours, 0.1%)
2         6               0.02          60 / 2.50           0.002
5         15              0.05          150 / 6.25          0.005
10        30              0.10          300 / 12.50         0.010
24        72              0.24          720 / 30.00         0.024
Table 11.1 The recommended degree of precision based on the period and number of rotations that occurred within the time span of the data. As noted in the text, this is more a guideline to avoid ambiguous solutions (aliases) and not necessarily establish the error for a given solution.
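One way to read Table 11.1 is that the justified precision scales as roughly 3% of the period per rotation of coverage. That 0.03 factor is my own inference from the table's entries, not something stated in the text, so treat this sketch as a reading aid rather than a rule:

```python
def span_and_precision(period_hours, rotations):
    """Reproduce the rows of Table 11.1.

    Data span is simply rotations * period; the justified precision is
    taken as 0.03 * period / rotations (a factor inferred from the table:
    3 rotations gives ~1% of P, 30 rotations gives ~0.1% of P).
    """
    span = rotations * period_hours
    precision = 0.03 * period_hours / rotations
    return span, precision

# The 10-hour row: 3 rotations and 30 rotations of coverage.
span3, prec3 = span_and_precision(10, 3)
span30, prec30 = span_and_precision(10, 30)
```

Intermediate rotation counts, like the 16 rotations in the 771 Libera example below the table, fall between the two columns on this same scaling.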
Using this table, what precision would be justified for the data for 771 Libera? Since the two sessions were separated by four days (96 hours) and the period is about 5.9 h, the asteroid rotated about 16 times between the start of the first session and the start of the second session. This is about halfway between the 3 and 30 rotations for 1% and 0.1% precision, so a good estimate is about 0.4% (it's a logarithmic scale, not a linear one). That translates into 0.02 h. We've already seen that changing the period by that small amount dramatically alters the fit of the data, so it is too large an error. Since the data from both nights covers a full cycle, or nearly so, there really is little worry about finding a period other than the correct one. In this case, you can use the "sliding period" approach or use the error reported by the Fourier routine. If using the latter approach, I tend to round it upwards, i.e., 0.018 becomes 0.02, 0.0043 becomes 0.005, and so on. In the current example, I'd use 0.002 h or 0.003 h, assuming that a check with the sliding period method confirms that the resulting error isn't too small or large.
11.4
Refining the Search Process
Most period search programs allow you to set a beginning period, the size of each step, and the number of steps, or periods, to be used in the search. What is the best sampling rate when exploring a range of periods? The object is to sample periods often enough that successive solutions will not differ by too much, meaning you won't skip over a minimum in the period spectrum without noticing it. Minima in the period spectrum show up without wasting time oversampling if you choose a period increment such that successive period choices differ by less than about 1/10 of a cycle over the entire data span. Thus,

Delta = 0.1 * P^2 / T    (11.2)

where T is the time span of the data.
For example, if T = 125 hours and P = 5.7 hours, Delta = 0.1 * (5.7 x 5.7) / 125 = 0.026 hours. To be safe, it's best to use a step a little smaller than this value. If you are using Fourier analysis, you can change the number of harmonic orders involved in the solution. Usually, the more orders involved, the more refined the period. That does not go on forever. If the data does not completely cover the curve, the Fourier analysis can derive some interesting solutions as it tries to fill in the missing data for its solution. Sometimes going another order higher doesn't significantly improve the solution and can actually degrade it. With Fourier analysis, you can also look at the period spectrum, which is a plot of the residual fit versus the period. Minimums in that plot are periods that are more likely correct than other periods. Analyzing the period spectrum is most
useful when the data has a number of possible solutions because of incomplete coverage of the entire curve or aliases. There are several examples of period spectra coming up in the Aliases section.
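Building a trial-period grid with the Eq. 11.2 step can be sketched as follows; the only figures taken from the text are the worked example's T = 125 h and starting period of 5.7 h.

```python
def trial_periods(p_start, p_end, time_span):
    """Generate a trial-period grid using the Eq. 11.2 step,
    Delta = 0.1 * P**2 / T.

    Successive trial periods then shift the phasing by less than about
    0.1 cycle over the full data span, so a minimum in the period
    spectrum cannot be stepped over unnoticed.
    """
    periods = []
    p = p_start
    while p <= p_end:
        periods.append(p)
        p += 0.1 * p * p / time_span   # step is recomputed, growing with P
    return periods

# The worked example from the text: T = 125 h, starting near P = 5.7 h.
grid = trial_periods(5.7, 6.0, 125.0)
first_step = grid[1] - grid[0]         # about 0.026 h
```

Because the step scales with P squared, short trial periods are sampled more finely than long ones, which is exactly where a fixed phase tolerance demands it.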
11.5
The Amplitude of the Lightcurve
Be careful about relying on the software’s estimation of the amplitude of the lightcurve. If the curve is not completely covered, Fourier analysis attempts to fill in the missing data. This can lead to an amplitude for a curve that is very far removed from reality. Also, bad data points can affect the amplitude determination. This is where a plot and some common sense come into play. With a clean plot, such as the one in Fig. 11.6, it is easy to estimate the maximum and minimum values and assign an amplitude. The amplitude is usually given as the maximum difference in the curve. The curve has two minimums, but they are of different depths. Use the brightest maximum and faintest minimum when making your estimate. In this case, I would give the value as 0.54±0.02 m.
Figure 11.6 A lightcurve where the amplitude is easy to estimate.
Figure 11.7 A lightcurve where the amplitude is more difficult to estimate.
The plot in Fig. 11.7 is not so easy. This is the asteroid I’ve mentioned several times where noisy data was overcome by large amounts of data. This is a great technique for period work but not so good for finding the amplitude. The first rule is do not use the absolute brightest and faintest individual points to find the amplitude. Instead, do a visual estimate of the average value of the maximum. You could also use a spreadsheet, take a range of values near maximum, and have the spreadsheet calculate the mean and standard deviation. Do the same for the faintest minimum. Use the difference between the two values for the amplitude of the curve. If you do use the spreadsheet approach, remember to use the two standard deviations correctly to achieve the final error, i.e.,
Error_total = sqrt( (Error_1)^2 + (Error_2)^2 )    (11.3)
My estimate for the amplitude of the curve in Fig. 11.7 is 0.28±0.03 m.
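The spreadsheet recipe just described translates directly into code. The differential magnitudes below are hypothetical; the point is the use of region means rather than single extreme points, with the two standard deviations combined in quadrature per Eq. 11.3.

```python
import math
import numpy as np

def amplitude_estimate(near_max, near_min):
    """Amplitude from the mean of points near maximum and minimum light,
    with the two standard deviations combined in quadrature (Eq. 11.3).
    """
    near_max, near_min = np.asarray(near_max), np.asarray(near_min)
    amp = near_min.mean() - near_max.mean()   # magnitudes: fainter is larger
    err = math.hypot(near_max.std(ddof=1), near_min.std(ddof=1))
    return amp, err

# Hypothetical differential magnitudes (brighter = more negative) taken
# near the brightest maximum and the faintest minimum of a noisy curve.
amp, err = amplitude_estimate([-0.26, -0.28, -0.27], [0.26, 0.27, 0.28])
```

With noisy data you would feed in many more points per region; the mean then suppresses the scatter that single-point extremes would pass straight through.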
11.6
Aliases in Depth
An alias for a period is another period where the data seemingly fits as well, or nearly as well, as with the original period. The most common encounter with aliasing is when you observe a target at regular intervals and the period has an integral or half-integral multiple that is almost exactly the interval between observing sessions. For example, if you work a target that has a period of 8 hours and you start observing at almost exactly the same time each night, you're observing the same part of the curve each night. That's because you are observing the target at the start of its fourth cycle from the original point in the curve. The same general idea holds true if the period is 4 or 6 hours. There's an example of this below. Another time you'll run into aliasing is when you're working a target with a symmetrical curve, i.e., one where it is very hard to tell one maximum/minimum pair from the other, and the interval between observations is a half-multiple of the period. For example, say you are working a target with a period of 5.333 hours. If the observing interval is 24 hours, then the target has gone through 4.5 cycles. If the curve is symmetrical, you don't know – unless you get enough data on a single night – which part of the curve you're working. Since 5.333 hours is probably short enough to get enough data in a single run, try a period of 16 hours. If session two is a multiple of 24 hours later, then you're working the other half of the curve. When the period is sufficiently short, such that you can catch most if not all of the curve in a single run, then the issue of aliasing is almost moot. That's because you can almost find the period from a single curve. I say almost because you can still get caught if you have a minimum number of sessions that are separated by a very large number of cycles. For example, I took a series of images in November of one year for asteroid lightcurve work. It wasn't until a few months later
146 Period Analysis
that I tried looking at the images again to search for variable stars. As it turns out, I found one. Fig. 11.8 shows the phased lightcurve from those images. By the way, be sure to note that the plots for this variable follow the standard conventions for variable stars, or at least eclipsing binaries: the primary minimum is at 0% phase, all the differential magnitudes are positive, and the times are converted to Heliocentric Julian Date. Also, the period, if shown, is in days instead of hours (the period is 6.564 h for those of you – like me – who usually deal in hours).
Having found that the star was variable, I obtained more observations. Unfortunately, I wasn't able to get as complete a run as the first. However, it was enough to combine the two sessions to get a slightly refined period. Fig. 11.9 shows a phased plot of the merged data.
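The cadence aliasing described at the start of this section can be sketched numerically. This is a minimal illustration of my own (the function name and structure are not from any measuring program): sampling a target once every 24 hours makes rotation frequencies that differ by a multiple of one cycle per day fit the data nearly equally well.

```python
# Illustrative sketch (not from the book): candidate alias periods created
# by a fixed observing cadence. If the true rotation frequency is f
# (cycles/hour) and you sample once per day, frequencies f + n/24 produce
# nearly the same phased data.

def alias_periods(true_period_h, sampling_interval_h=24.0, n_max=3):
    """Return plausible alias periods (hours) for a given true period."""
    f = 1.0 / true_period_h            # true frequency, cycles per hour
    fs = 1.0 / sampling_interval_h     # sampling frequency, cycles per hour
    aliases = []
    for n in range(-n_max, n_max + 1):
        if n == 0:
            continue                   # n = 0 is the true period itself
        f_alias = f + n * fs
        if f_alias > 0:
            aliases.append(1.0 / f_alias)
    return sorted(aliases)

# An 8-hour rotator observed once every 24 hours:
for p in alias_periods(8.0):
    print(f"{p:.3f} h")
```

Note that the 4-hour and 6-hour values mentioned in the text fall out of this relation directly.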
Figure 11.8 A phased plot of an eclipsing binary. Only the data from the first session is used here.
Figure 11.9 The phased plot of a variable star using two sessions. Compare to Fig. 11.10. The fits are very similar yet the periods differ by about 230 seconds. You must be careful when merging two sessions separated by a large number of cycles since a large number of closely related periods can give the same apparent fit of the data.
Figure 11.10 A phased plot for the wrong period. This plot used the same data as Fig. 11.9. The difference is that the assumed period for this curve is 229 s less, or 6.498 h instead of 6.564 h.
The change in the period solution was 0.001 d, or 86.4 s. The time between the two sessions was about 56 d, or 204.81558 revolutions. Or was it? Presume for a moment that I didn't get complete coverage on the first night, such that I couldn't get a better initial period. I might have come up with a period of 0.2707667 d (6.4984 h) and a difference of 0.00265 d (229.0 s). That alternate period amounts to 206.82 cycles over the same interval. A phased plot based on it is shown in Fig. 11.10. It's hard to tell the difference and, if I didn't know better from the first night's run, almost impossible to say which was the right one.
I mentioned before the period spectrum that's available with Fourier analysis. Fig. 11.11 shows the period spectrum plot in Canopus for the period search that led to the accepted solution for this system, i.e., 0.2734167 d. The X-axis is the period – in days – that was sampled, and the Y-axis is the RMS residual of the data points against the ideal curve. Each one of the minimums in the period spectrum plot represents a potential solution. As you can see, there are several where the residual values differ by the smallest of amounts.
What would have made solving this period much easier was to get a second session of data the next night or as soon as possible after. That way, there would have been a minimum number of revolutions between sessions, and the number of possible alias periods would have been significantly smaller. Note that the back-to-back nights do not have to come at the start or even the end of all your sessions. You could get a session in November, two back-to-back in December, and one in January. The main point is getting two sessions as close as possible to one another. Had I been able to work the variable the night following my session two months after the initial session, I'd also have met the requirement of getting two closely spaced sessions.
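A short sketch may make the arithmetic clearer. The function below is mine (only the 56-day gap and the 0.2734167 d period come from the text): when two sessions are separated by many cycles, adding or removing a whole cycle over the gap barely changes the period, which is exactly why so many closely related periods fit.

```python
# Sketch (illustrative): periods that complete an integer number of extra
# (or fewer) cycles over a long gap are nearly indistinguishable in a fit.

def neighboring_periods(gap_days, period_days, n_steps=2):
    """Periods whose cycle count over the gap differs from the trial
    period's by a whole number of cycles."""
    cycles = gap_days / period_days
    return [gap_days / (cycles + dn) for dn in range(-n_steps, n_steps + 1)]

gap = 56.0              # days between the two sessions (from the text)
accepted = 0.2734167    # accepted period, days (from the text)
for p in neighboring_periods(gap, accepted):
    print(f"{p:.7f} d = {p * 24:.4f} h  ({(accepted - p) * 86400:+.1f} s)")
```

Running this reproduces the alternate 0.2707667 d period from the text as the trial period plus two whole cycles over the gap.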
Using the second and third sessions would have eliminated many of the aliases while the first, being some 200 revolutions removed, would have refined the period to a very high degree.
Figure 11.11 Fourier analysis period spectrum from MPO Canopus. The RMS fit is plotted on the Y-axis and the period in days is on the X-axis. Minimum points indicate a possible solution. The high number of solutions is the result of having a minimum number of sessions spaced by a large number of cycles.
Figure 11.12 An intriguing period spectrum. The complex curve and high number of possible solutions is the result of data separated by a large number of cycles. See the text for an explanation of the complex nature of the period spectrum.
What in the world is going on in Fig. 11.12? First, you need a little background. I observed the asteroid 1263 Varsavia for a few days in April one year. I was not able to return to it, and so my friend Bob Stephens added quite a bit of extra data during May. I did a period search after Bob's first run and got the above period spectrum. Alan Harris gave us insight on the meaning of this particular spectrum. The broad "sinusoidal" curve with minimums at 8.3 hours, 10.8 hours, and 12.5 hours corresponds to fits of the April data with half-cycle shifts between successive period minimums. The high-frequency ups and downs are minimums corresponding to half-cycle shifts between the April data and Bob Stephens' one session. The structure is easy to understand; whether to believe any of the minimums is the correct period solution would require seeing the resulting fits.
Figure 11.13 The final period spectrum for 1263 Varsavia. Be careful: even when there's an "obvious" solution, it's not a guarantee.
As it turns out, “None of the above,” was the right answer. After merging all the data sets, the period turned out to be around 7.23 hours. The final period spectrum is shown in Fig. 11.13.
11.7 Plotting the Half-Period
Sometimes you can eliminate a number of aliases by plotting the data against one-half the proposed period. If instead of a bimodal curve or pure chaos you get something close to a monomodal curve, with only one maximum and minimum, then the proposed period has a very real chance of being correct. Let's go back to that variable star and plot the half-periods, i.e., 3.282 h and 3.249 h (Fig. 11.14). Now the "truth" is easier to see. The left-hand plot is the half-period for the adopted solution of 0.2734167 d (6.564 h); the plot on the right is half the "what if" period, 0.2707667 d (6.498 h). Concentrate on how the maximums line up. The left-hand plot is clearly better. If the curve is bimodal and you have a number of possible solutions, the half-period plot can be a big time saver.
Figure 11.14 On the left is the half-period of 3.282 h, while on the right the half-period is 3.249 h. The half-period method can help eliminate aliases but it doesn't always exactly predict the true period.
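A rough numerical sketch shows why the half-period fold is diagnostic. Everything here is synthetic and of my own construction (the dispersion statistic, sample times, and amplitude are invented; only the 6.564 h and 6.498 h periods come from the text): a bimodal curve folded at half its true period gives a clean, coherent fold, while folding at half a wrong period smears the curve.

```python
import math

def fold_dispersion(times, mags, period, nbins=10):
    """Mean within-bin (max - min) of a phased curve; smaller = cleaner fold."""
    bins = [[] for _ in range(nbins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[min(int(phase * nbins), nbins - 1)].append(m)
    spreads = [max(b) - min(b) for b in bins if len(b) > 1]
    return sum(spreads) / len(spreads)

# Synthetic bimodal lightcurve: rotation period 6.564 h, two brightness
# cycles per rotation, sampled on two nights three days apart.
true_p = 6.564
times = [0.1 * i for i in range(60)] + [72.0 + 0.1 * i for i in range(60)]
mags = [0.2 * math.sin(2 * math.pi * t / (true_p / 2)) for t in times]

good = fold_dispersion(times, mags, true_p / 2)   # half the true period
bad = fold_dispersion(times, mags, 6.498 / 2)     # half the alias period
print(f"dispersion at true half-period: {good:.3f}, at alias: {bad:.3f}")
```

The correct half-period gives the smaller dispersion, which is the numerical version of "the maximums line up."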
Figure 11.15 The plots from two sessions for 1022 Olympiada. The derived period for each is different from the other.
Figure 11.16 Phased plots using both data sets. The plot on the left is phased against a period of 3.83 h. The plot on the right used a period of 4.6 h.
11.8 A Specific Alias Example
Fig. 11.15 shows two plots of the raw data for 1022 Olympiada using data acquired in June 1999. Fig. 11.16 shows two phased plots of the combined data. Using the Fourier analysis in MPO Canopus, the left-hand plot has a period of 3.83 h, while the right-hand plot has a period of 4.6 h. There is good agreement in the phase angle for the extrema, and the overlap of data seems reasonable for both. Which is the right period? When you have a conundrum such as this, check to see if you might be finding two periods that differ by one revolution over the time between observations. In this case,

4.60 h * 5 rotations = 23.00 h
3.83 h * 6 rotations = 22.98 h
As you would expect from this whole discussion, the starting times of the two sessions were separated by 23 hours. Resolving aliases is one of the biggest challenges in determining lightcurve periods, especially for targets whose period has a multiple close to 24 hours. In this case, "More data!" is the only solution. By the way, it turns out that after remeasuring the images, including a "lost" night, and working the analysis again, the shorter period of 3.83 h is the more likely solution. When the results don't make sense, double-check the data. Never be afraid to check your work and, if necessary, revise your results.
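The whole-rotation check above is easy to automate. This is a sketch of mine (the function name is invented; the 23-hour spacing and the two candidate periods come from the text):

```python
# Sketch: when two candidate periods both fit, check whether each completes
# a nearly whole number of rotations in the time between session starts.

def cycles_between(period_h, spacing_h=23.0):
    """Return (cycle count, nearest whole number) over the session spacing."""
    n = spacing_h / period_h
    return n, round(n)

for p in (4.60, 3.83):
    n, nearest = cycles_between(p)
    print(f"P = {p} h -> {n:.2f} cycles in 23 h (nearest whole: {nearest})")
```

Both candidates land within a few hundredths of a whole cycle, which is the signature of a 24-hour-style alias pair.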
11.9 The Case of 3155 Lee
Just when you thought you were getting the hang of things, I'll give you a parting reminder that you should never make assumptions about a curve.
Figure 11.17 The lightcurve of 3155 Lee phased against 4.16 hours. This is an example of forcing a square peg into a round hole. This period is not right.
Figure 11.18 The final plot for 3155 Lee. The period is about 8.3 hours. This is a rare example of a curve that shows four pairs of extrema during each rotation. The asteroid is believed to be a cast-off following a collision between Vesta and another asteroid.
3155 Lee is a 10 km diameter asteroid that was discovered by Brian Skiff of Lowell Observatory in 1984. It's named after the famous Confederate States of America general, Robert E. Lee. The significant point of interest about this asteroid is that it's believed to be a splinter off the large main-belt asteroid Vesta, created when Vesta and another asteroid collided.
I obtained initial images of the asteroid one year in November and, making the initial assumption of a bimodal curve, found a period of just over 4 hours. The fit (Fig. 11.17) was really very bad, and that should have made me suspicious. After I posted the results on the Collaborative Asteroid Lightcurve Link (CALL) Web site, I received a message from Dr. William Ryan of New Mexico Highlands University, Las Vegas, NM. He told me that 3155 Lee was part of a study he had been conducting and that his results indicated four maximums and minimums per rotation. I ran two more sessions and found that, by assuming the curve had four maximums and minimums, the data fit fairly well for a period of about 8.3 hours. Fig. 11.18 shows that phased plot. The new period is in close agreement with Dr. Ryan's.
The lightcurve suggests an irregular shape, which is to be expected if the asteroid was created as a result of a collision between Vesta and another body. You need to be open to solutions that are outside the norm. Sometimes, as I did in this case, you might try to fit a square peg into a round hole by forcing a solution to match expectations instead of letting the data dictate the solution.
11.10 Period Analysis on a Spreadsheet

There are many programs available to measure the images you take for lightcurve work but a smaller number that perform the period analysis, at least among those tailored to astronomical purposes. However, most people have a spreadsheet program of one kind or another, and it can be used very effectively for lightcurve period analysis. The only requirement for your measuring program is that it be able to export the date and magnitude pairs for use in the spreadsheet.
It's important to keep in mind some of the concepts discussed earlier. For example, you should correct the observation date/times for light-time if working asteroids or to Heliocentric Julian Date if working variable stars. Try using the "wrong" one for the given target and compare the period results. You might be surprised at how dramatically the period solution changes when you don't use the corrections or apply them in the wrong way.

11.10.1 The Essential Data

A simple text file with the date and magnitude pairs is all you need. For example,

J.D.          O-C
51439.61155   0.065
51439.61586   0.080
Figure 11.19 Finding the period of a lightcurve in a spreadsheet. The text explains the columns and formulae used to determine the period of the curve. The correct period is found when the same inflection points in the curves align vertically.
Fig. 11.19 shows the data in a spreadsheet with some additional cells and a graph. The JD column is the light-time-corrected Julian Date minus 2400000.0. The O–C column represents the differential magnitude in the sense of Object minus Comparison.

Revs      Number of revolutions since a fixed date based on an assumed period
Phase     Fractional part of Revs
Plot Mag  Differential magnitude plus an offset so that the data from the given session can be separated vertically from other session data
There are three other cells of particular importance.

D3   Period, in hours, to be analyzed
E3   Period in days; it uses the formula D3 / 24.000. The result in E3 is given a label ("Period") so that the spreadsheet doesn't change cell references when copying and pasting cells.
H3   Offset applied to the second session so that its data can be separated vertically from the first session. It has been labeled "CurveOffset" so that the spreadsheet can refer to it by name. If you have more than two sessions, you need to create an additional cell for each additional session to hold that session's offset value.
An example of a cell formula for the Revs column is

(A7 – 51400.0) / Period

A typical Phase column formula is

D7 - INT(D7)

while the Plot Mag column formula for the second session follows

B76 + CurveOffset

The cells holding the Plot Mag for the first session have no offset value. It is the "reference session" or "Session Zero." When working with this setup in a spreadsheet, you don't need to be concerned about the true offset for the various sessions since you're not trying to merge the data into a single curve. Instead, you're trying to get the same parts of the curve to match vertically. This is a computerized version of the old days of plotting the data on graph paper and sliding one graph over another until the curves matched and then figuring the period from the time difference between the matched points. There is still an advantage to trying to merge the data into a single curve – you can get a more critical match of the minimums and/or maximums.
Changing the value in D3, the period in hours, causes the value in E3, the period in days, to change and the plots to shift horizontally in relation to one another. In Fig. 11.20 you see the result of changing the period by only 0.01 h, to 5.88 h. Note the slight misalignment of the maximums near 0.400 phase. The new value is close but it is not the correct period. Fig. 11.21 shows the effect of a 0.1 h change.
Figure 11.20 The lightcurves when the period is changed slightly from the true solution. The lower curve has shifted to the left.
Figure 11.21 A change of only 0.1 h produces a dramatic misalignment between the two curves.
Figure 11.22 The final plot. The true period has been used. Vertical lines were added to show the alignment of the curves and the offset value for the second curve has been changed so that it merges with the first curve almost perfectly.
Finally, let's put the period back to 5.889 h, include some vertical lines in the plot, and move the second session down so that it merges with the first (Fig. 11.22). This helps confirm that the proposed period is very close to the true period. Another set of data would be helpful, but you are well on your way. The offset to move session 2 to merge with session 1 is –0.080. It's no coincidence that the asteroid was predicted to be 0.09 magnitudes brighter for the second session. The session 2 data had to be moved down so that it was referenced to the same approximate magnitude of the asteroid as in session 1.
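The spreadsheet columns described above translate directly into a few lines of code. This is a minimal Python rendering of my own (the function name is invented; the 51400.0 epoch, the sample J.D./O–C pairs, and the column roles follow the text):

```python
# Sketch of the Revs / Phase / Plot Mag columns:
#   Revs  = (JD - epoch) / period      (like (A7 - 51400.0) / Period)
#   Phase = fractional part of Revs    (like D7 - INT(D7))
#   Plot Mag = magnitude + session offset (like B76 + CurveOffset)

def phase_data(jd_mag_pairs, period_h, epoch=51400.0, offset=0.0):
    """Return (phase, plot_mag) pairs for one session."""
    period_d = period_h / 24.0
    out = []
    for jd, mag in jd_mag_pairs:
        revs = (jd - epoch) / period_d
        phase = revs - int(revs)       # fractional part (JD > epoch here)
        out.append((phase, mag + offset))
    return out

session1 = [(51439.61155, 0.065), (51439.61586, 0.080)]
for ph, m in phase_data(session1, 5.889):
    print(f"{ph:.4f}  {m:.3f}")
```

Changing `period_h` and replotting is the code equivalent of editing cell D3 and watching the curves slide past one another.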
Quiz time! What's wrong with all the plots so far? The Y-axis is upside down. The more positive values should be at the bottom. Fig. 11.23 shows the correct plot.
There's a more analytical reason why the plot in Fig. 11.23 is correct. Go back to the spinning potato model for an asteroid with the spin axis at right angles to the line of sight. The asteroid is at its brightest when seen broadside. As the asteroid rotates, the total viewing area changes relatively slowly at the brightest part of the curve. However, when the asteroid is seen nearly end-on, the viewing area is much smaller and changing more rapidly. For the even more technically minded, this all has to do with intensity ratios, not magnitude ratios. Given this, you can see why the curve is broader near maximum than near minimum. Of course, the shape of the object has lots to say about the curve. If the object is somewhat "cube-like," in that it has about the same surface area when seen "broadside" and "end-on," then the curve will be more symmetrical.
Using Eq. (11.1) on page 137, and assuming that the Z-axis (spin axis) of the asteroid is "c" and c = 1, compute the ratios a/c, b/c, and a/b. Note that the two minimums are not the same, so the surface areas of the two "ends" of the asteroid are not the same.
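Assuming Eq. (11.1) has the common textbook form A = 2.5 log10(a/b) for a triaxial ellipsoid viewed equator-on (check page 137 for the exact form the book uses), the intensity-versus-magnitude point can be sketched like this; the function name and sample amplitudes are mine:

```python
# Sketch: magnitudes track intensity (projected cross-section) ratios, so a
# lightcurve amplitude A implies an axis ratio a/b = 10**(0.4 * A),
# assuming A = 2.5 * log10(a/b).

def axis_ratio_from_amplitude(amp_mag):
    """Ellipsoid a/b ratio implied by a peak-to-peak amplitude in magnitudes."""
    return 10 ** (0.4 * amp_mag)

for amp in (0.1, 0.3, 0.9):
    print(f"A = {amp} mag -> a/b = {axis_ratio_from_amplitude(amp):.2f}")
```

This is also why a symmetrical, nearly "cube-like" body produces a low-amplitude curve: the projected areas, and hence the intensities, barely change with rotation.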
Figure 11.23 The correct plot. Positive values are towards the bottom.
11.11 From Lightcurve to Shape

I've mentioned several times that with enough lightcurves of an asteroid it's possible to determine the shape and spin axis of the asteroid. This "inversion" process has taken great strides in the past few years. Among those leading the way are Mikko Kaasalainen and Stephen Slivan, both of whom have published excellent papers on the topic. To close out the analysis section for asteroids, I'll show four examples from just one of Kaasalainen's papers (see citation on the copyright page). Compare the shape that was found against the lightcurves, Figs. 11.24–11.27.
Figure 11.24 For each lightcurve in the figure above, and for all the figures, the top row is the date in day, month, year format. On the bottom line, the first value is the phase angle at which the lightcurve was obtained. The second two values are the Earth and Sun aspect angles.
Figure 11.25 Four of the lightcurves used to determine a shape for 1990 SB. This is a typical “spinning potato,” being very elongated in one axis. As might be expected, the lightcurves are bimodal – having two maximums and minimums per rotation.
Figure 11.26 The highly elongated shape of 1627 Ivar would lead you to believe it always has a large-amplitude lightcurve. This will not always be the case. Imagine if the viewing angle had been such that you were looking down the axis of rotation. Then, you would see very little change in the lightcurve. The difference in the shape and amplitude of the curve with different viewing angles of the rotation axis and sunlight hitting the surface provide significant clues for the shape modeler.
Figure 11.27 3908 Nyx is not so easy an asteroid to model. While it’s not spherical or smooth by any means, it is certainly not as elongated as the other examples. That’s reflected in the amplitude of the lightcurve. Again, the amplitude of a single lightcurve does not determine the shape of the asteroid but it does put constraints on the relative ratios of the three axes as presented to the observer at the time observations are made.
Chapter 12
Building Star Systems

The asteroids have had their say. Now it's time for variable stars, specifically eclipsing binaries, to take center stage. EBs, as they are called for short, were my first serious interest, and so it's only natural that I give them some equal time. There are many excellent books on variable stars and EBs in particular. Some are very readable. Some make me think my third year of calculus was overly easy. Check the Bibliography for a small sampling of what's available.
There's some pretty nifty software available that allows you to take your lightcurve measurements, plug them into the program, and start building models of systems. As you go through the process, you see the curve the model would create and can compare it to yours. When the two match, you have a possible solution. This is a very addictive process. You may find yourself spending more time making stars than getting photons if you're not careful.
I say "possible solution" because unless you also have spectroscopic data that shows how fast the stars are moving in their orbits and at what point in the lightcurve the velocities reach zero, you usually cannot uniquely define a binary system. You can come close and keep the range of possibilities to a minimum, but without that extra data, you don't quite have all the pieces of the puzzle. Those books in the Bibliography can tell you why much better than I.
For this section, I'll be using screen shots from a program called Binary Maker, written by Dr. David Bradstreet of Eastern College, St. Davids, PA. Information about Binary Maker is included in the Bibliography. I want to thank Dr. Bradstreet for granting his permission to use the screen shots and for his very kind help as I struggled through equipotential points, mass ratios, and so on.
12.1 Getting Started
This can't possibly be a complete tutorial on using a program such as Binary Maker. Just as with asteroid lightcurves, there's a lot of intuition and experience built into the process. So, the plan is to give you a quick overview of what Binary Maker does and then show you how changing one critical parameter while keeping the others constant changes the theoretical curve. Then, when you start working with your own data and the theoretical curve doesn't fit your data, you have an idea of what to change.
It often helps to have a good road map before taking a trip. In the case of binary star modeling, nothing beats Dirk Terrell et al.'s Binary Stars – A Pictorial Atlas. This book is page after page of lightcurves for known systems
162 Building Star Systems
and the critical parameters needed to model the given system. You can save a lot of time by going through the book and finding a lightcurve that matches or at least closely resembles the curve from your data. Use the parameters in the book as a starting point for modeling a curve to fit your data. Why re-invent the wheel?

12.1.1 Preparing the Data
The text files you've seen previously for asteroid lightcurve work are not quite right for most star-modeling programs. What those modeling programs need is a series of phase and normalized flux pairs. This means that the flux values are almost all between 0 and 1.0, with an occasional data point going above 1.0. You can also use values of 0 to 100 or some other normalization scale. Binary Maker's examples are in the range of 0 to 1, so I'll go with that scheme for now. Below is a small sampling of such a file prepared from the data for the variable star that we're going to study. Since the pairs use phase instead of actual dates, the left-hand column values are always in the range of 0 to 0.999...

Phase    Flux
0.0021   0.5754
0.0092   0.5739
0.0419   0.6492
0.0490   0.6662
0.0561   0.6893
0.0632   0.6589
0.0702   0.7332
0.0773   0.7600
0.0844   0.7720
0.0915   0.7914
Figure 12.1 A portion of the spreadsheet that converts JD/magnitude pairs to Phase/Flux pairs for use in Binary Maker. The cells and columns are explained in the text.
MPO Canopus automatically generates this data for you in a format compatible with Binary Maker and the Windows version of the Wilson–Devinney program. Of course, you can use a spreadsheet to generate the data as well. Fig. 12.1 shows a portion of the spreadsheet used to create the phase-flux data.

Period
B2 is a named value cell ("Period") so that it can be used as a constant value throughout the spreadsheet. This is the period, in days, for the data. It would have been found earlier using one of the period analysis techniques.

Zero Date
B3 is also a named value cell ("ZeroDate") with a constant value. This is the arbitrary zero-point date used to compute the phase. All the Julian Date values in this spreadsheet are ActualJD – 2400000.0 and are corrected to Heliocentric Julian Date.

Base Mag
B4 is another named value cell ("BaseMag") but is calculated. It stores the minimum of the differential magnitudes starting in B8. The formula is
MIN(B8:B239)

J.D.
Cells A8 and down contain the Julian Date of the observation minus a base value of 2400000.0.

OM
Cells B8 and down contain the differential magnitude of the target in the sense of Target – Comparison.

Phase
Cells D9 and down contain the calculated phase for the observation. For example, the formula for D9 is
(A8-Zero_Date)/Period - INT((A8-Zero_Date)/Period)

Flux
Cells E9 and down contain the calculated flux based on the differential magnitude. The flux formula is
Flux = 10^(-0.4 * (Mag – BaseMag))
where Mag is the differential value and BaseMag is the minimum of all differential values. The formula for cell E9 is
POWER(10,-0.4 * (B8-Base_Mag))
Before you can continue, you need to sort the observations in order of ascending phase. In Microsoft Excel® (Fig. 12.2):
Figure 12.2 Selecting the cells for a data sort. The Phase/Flux pairs must be in ascending order by Phase.
1. Select all the data cells, as shown above. In your data, there may be many more rows than you see on the screen. Be sure to include those as well.
2. Select Data | Sort.
3. Set the sort column to 'D' (Phase).

Once you have the data sorted, save the file as a text file, or copy just the data cells in columns D and E and paste them into a text file. For the sake of compatibility with Binary Maker, give the file an NRM extension, e.g., MyData.NRM. The only things that should be in the text file are the phase and flux values. Do not include headers or other data. Also, be sure the spacing between columns on every row is the same.
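The whole conversion can also be sketched in a few lines of Python instead of a spreadsheet. The function names are mine; the phase and flux formulas mirror the columns described above, and the sort replaces the Data | Sort step:

```python
# Sketch: HJD / differential-magnitude pairs in, phase-sorted phase/flux out.

def to_phase_flux(data, period_d, zero_date):
    """data: list of (hjd, diff_mag) pairs. Returns (phase, flux) pairs,
    sorted by ascending phase, with flux normalized to the brightest point."""
    base_mag = min(m for _, m in data)              # like MIN(B8:B239)
    out = []
    for hjd, mag in data:
        cycles = (hjd - zero_date) / period_d
        phase = cycles - int(cycles)                # fractional part
        flux = 10 ** (-0.4 * (mag - base_mag))      # like POWER(10,-0.4*(B8-Base_Mag))
        out.append((phase, flux))
    return sorted(out)

def write_nrm(pairs, path):
    """Write the two-column text file (phase, flux) with uniform spacing."""
    with open(path, "w") as f:
        for phase, flux in pairs:
            f.write(f"{phase:.4f} {flux:.4f}\n")
```

Because the base magnitude is the minimum (brightest) differential magnitude, the brightest observation maps to a flux of exactly 1.0, matching the 0-to-1 normalization scheme described earlier.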
12.2 Binary Maker
Now that you have data, you need to load it into Binary Maker. This is explained in the Binary Maker documentation. You will be loading the NRM file that was created by MPO Canopus or that you saved using the spreadsheet. Fig. 12.3 shows a sample screen from Binary Maker after finding a solution.
The plot on the lower left is of the observed data and theoretical data. You have a possible solution when it's hard to tell the two sets of data apart. At lower right is the set of theoretical radial velocity curves, one for each star. Since no actual radial velocity data is available, it's not possible to make a comparison nor, if you recall, is it possible to uniquely define the system. In fact, what you're seeing is one of ten similar but slightly different solutions that all fit the observed data rather well. At upper right is a graphical representation of the system. This is likely a W UMa system, where both stars have overflowed their Roche lobes and are in contact with one another. Table 12.1 gives some of the more important values from Binary Maker.
Figure 12.3 A screen shot from Binary Maker. The lower left shows the theoretical curve against observed data, the lower right shows the radial velocity curves, while the upper right shows a model of the system.

Value                       Star 1      Star 2
mass ratio input (x, 1/x)   3.500000    0.285714
Omega 1 / 2                 7.195497    7.195497
Omega inner / outer         7.270564    6.645007
C 1 / 2                     3.802937    3.802937
C inner / outer             3.836300    3.558275
Fillout 1 / 2               0.120000    0.120000
Lagrangian L1 / L2          0.625512    1.491548
r1 / r2 (back)              0.527273    0.310437
r1 / r2 (side)              0.500946    0.273895
r1 / r2 (pole)              0.464284    0.262457
r1 / r2 (point)             0.625512    0.374488
Surface area                3.463455    1.012447
Mean radius                 0.497501    0.282263
Inclination                 73.000
Wavelength                  5500.00
Temperature                 4400.00     4900.00
Luminosity                  0.6499      0.3501

Table 12.1 Binary Maker results after modeling an eclipsing binary system.
Getting a good model requires knowing the stars' temperatures. If you can find the color index of a star, then you know its approximate temperature. In this case, I took V and R images to get the (V–R) color index and thus the temperatures. Don't be concerned if you don't understand all of the terms. The Binary Maker manual and texts in the Bibliography will help make sense of them.
12.3 The Many Possibilities
The solution I found in the previous section was just one of ten or so that come close to fitting the observations. It took a lot of playing around with the parameters within Binary Maker. The next sections are aimed at trying to save you a little time and effort by showing you what happens to the theoretical curve when one parameter is changed and the others are held constant. Nature, of course, would not allow you to find a solution so easily. The effects caused by changing one parameter can often be offset by changing the value of another. You'll see this often when you start working with your own data. Eventually, experience, along with more reading, will lead you to solutions more quickly. However, do take time to just play. Create imaginary systems and play what-if games, or use the data supplied with Binary Maker or from other sources. As the saying goes, "Practice makes perfect" (were it only true of my piano playing).
All of the examples are based on the overcontact binary shown above. Needless to say, there are many other types of eclipsing binaries, each with their own set of rules – all made to be broken. An important point to watch in the examples is not only the depth of the theoretical curve but its shape. At times you'll see it flatten out at a minimum. In others, there seems to be almost a two-tier drop to primary minimum (what are called "shoulders"). Many of the changes to the curves can be too subtle to be seen in the screen shots. That's why playing with the program on your own is so valuable.
12.4 The Effects of Changing the Inclination
For this section, all parameters except the orbital inclination were held constant. Remember that inclination is the angle between the plane of the orbit and the plane of the sky. So, if we see the orbit from one of the poles, the inclination is 0°. If we see the orbit edge-on, then the inclination is 90°. Refer to Fig. 12.3 for the "base" or "normal" system.
Fig. 12.4: The orbital inclination is 60°. Both the theoretical and actual data curves are plotted. The theoretical curve is shallower at both minimums because less of each star is covered during a minimum than in the original solution.
Fig. 12.5: The orbital inclination is 70°. The theoretical curve is still the shallower of the two but is almost the same as the data curve.
Fig. 12.6: The orbital inclination is 80°, almost edge-on. The theoretical curve has deeper minimums than the data curve. The smaller star is almost completely covered, causing a slight flattening of the curve at primary minimum.
Fig. 12.7: The orbital inclination is now 90°, fully edge-on. At primary minimum, the smaller, but hotter, star is completely covered, which causes a decided flattening. Note that the shoulders are rounded, however. This is due to limb darkening.
Figure 12.4
Figure 12.5
Figure 12.6
Figure 12.7
12.5 The Effects of Temperature Changes in the Primary
Some very dramatic changes can occur if you change the temperature of either star or both. Let's take a look at what happens when the primary, the smaller but hotter star, changes temperature while the other parameters are kept the same.
Fig. 12.8: All parameters are the same as in Table 12.1 except that the temperatures for the two stars were changed from 4500K and 4900K to 5500K and 6000K, respectively. The theoretical curve has shallower minima. Why?
Fig. 12.9: The temperature for the larger star has been reset to "normal," meaning it is at 4500K, and will stay there. The temperature of the smaller star has been set to 4000K. The theoretical curve is shallower at the left-hand minimum but deeper at the right. Note that the effect on the primary minimum is much more pronounced.
Fig. 12.10: The temperature has been increased to 4200K. The left-hand minimum is deeper but still some ways from the observational data.
Fig. 12.11: The temperature is now 4400K. The two curves are still merging, but the left-hand minimum is still shallower than the data would indicate.
Fig. 12.12: I've jumped way ahead to where the temperature is now 5200K. The theoretical curve is deeper at the left-hand minimum, while the secondary minimum is just a bit shallower than the observed data curve.
Figure 12.8
Figure 12.9
Figure 12.10
Figure 12.11
Figure 12.12
12.6 The Effects of Temperature Changes in the Secondary
Let's reverse which star holds steady while the other changes: now we keep the original primary at its temperature of 4900K and change the temperature of the secondary, which has a temperature of 4500K in the original solution.

Fig. 12.13: This figure shows the curves when the secondary has been chilled to only 4000K. The primary minimum (left) is much deeper than the data. Remember, this is when the hotter star is behind the cooler one. Since the star in front is much cooler than "normal," the total light of the system is much less.
Fig. 12.14: The secondary is just slightly cooler than the original starting point, being at 4400K. The theoretical curve is still a bit deeper than the data set, while the two are almost perfectly merged at the secondary minimum.

Fig. 12.15: The roles are almost reversed again. The secondary star's temperature is 4800K, just 100K less than the primary's. The theoretical curve is much shallower at the primary minimum, while it is just a bit deeper at the secondary minimum.
Figure 12.13
Figure 12.14
Figure 12.15
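Why chilling the secondary deepens the primary minimum (Fig. 12.13) follows from the Stefan–Boltzmann law: surface brightness scales roughly as T⁴, so at primary minimum, when only the cool star is visible, the system is that much fainter. The toy model below uses uniform disks, total eclipses, and bolometric fluxes; the radii are hypothetical, and limb darkening, reflection, and bandpass effects are all ignored.

```python
import math

def minima_depths(r_hot, t_hot, r_cool, t_cool):
    """Depths (in magnitudes) of both minima for a toy totally-eclipsing
    pair of uniform disks, with surface brightness ~ T**4.

    r_hot  : radius of the smaller, hotter star (arbitrary units)
    r_cool : radius of the larger, cooler star
    """
    f_hot = r_hot**2 * t_hot**4        # each star's light, arbitrary units
    f_cool = r_cool**2 * t_cool**4
    total = f_hot + f_cool
    # Primary minimum: the hot star is completely hidden behind the cool one.
    f_primary = f_cool
    # Secondary minimum: the hot star transits, blocking part of the cool disk.
    f_secondary = f_hot + (r_cool**2 - r_hot**2) * t_cool**4
    to_mag = lambda f: -2.5 * math.log10(f / total)
    return to_mag(f_primary), to_mag(f_secondary)

# Hypothetical radii; the temperatures follow the chapter's "normal" solution.
p0, s0 = minima_depths(0.20, 4900, 0.35, 4500)   # original: 4900 K / 4500 K
p1, s1 = minima_depths(0.20, 4900, 0.35, 4000)   # secondary chilled to 4000 K
print(f"normal : primary {p0:.2f} mag, secondary {s0:.2f} mag")
print(f"chilled: primary {p1:.2f} mag, secondary {s1:.2f} mag")
```

Running this shows the primary minimum noticeably deeper in the chilled case, in the same sense as Fig. 12.13.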
12.7 The Effects of Changing the Mass Ratio
Fig. 12.16: All the parameters are from the original model except that the mass ratio is now 2.0. The primary and secondary minimums are deeper than the actual curve.

Fig. 12.17: The mass ratio has been set to 3.0. The theoretical curve is just a bit deeper than the actual curve.

Fig. 12.18: The mass ratio is now 5.0, making the theoretical curve shallower at both minimums.
Figure 12.16
Figure 12.17
Figure 12.18
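The mass ratio changes the lightcurve because, in a contact or semi-detached system, the stars' sizes are tied to their Roche lobes, and the relative sizes of the lobes depend only on q. The modeling program handles the full Roche geometry internally; as a rough guide, the widely used Eggleton (1983) approximation for the volume-equivalent lobe radius can be evaluated directly. This is a sketch of the geometry, not the program's actual code.

```python
import math

def roche_lobe_fraction(q):
    """Eggleton (1983) approximation to the volume-equivalent Roche-lobe
    radius of a star, in units of the orbital separation.
    q = M_star / M_companion; the formula is good to about 1% for all q."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

for q in (1.0, 2.0, 3.0, 5.0):
    r1 = roche_lobe_fraction(q)        # lobe of the more massive star
    r2 = roche_lobe_fraction(1.0 / q)  # lobe of the less massive star
    print(f"q = {q:.1f}: R1/a = {r1:.3f}, R2/a = {r2:.3f}")
```

At q = 1 the lobes are equal, each about 0.38 of the separation; as q grows, the massive star's lobe swells while its companion's shrinks, changing the relative disk areas and hence the eclipse depths.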
12.8 The Effects of Gravity/Limb Darkening and Reflection
Fig. 12.19: The original curve, one of several solutions found for the variable star.

Fig. 12.20: Gravity and limb darkening have been disabled. The theoretical curve is shallower at both minimums and its shape is a little more rounded.

Fig. 12.21: With the reflection effect disabled, the curves are almost back to normal. There is a slight change just before and after the maximum at the middle of the plot. In particular, there seems to be a "bump" on the descending side. One model, which assumes a star spot, gets a very close fit to this hump.
Figure 12.19
Figure 12.20
Figure 12.21
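Limb darkening, invoked above and back in Fig. 12.7, is usually parameterized by a simple law. The linear law below is the most common starting point; the coefficient u = 0.6 is a typical but hypothetical value for a cool star (real modeling codes interpolate u from tables keyed to temperature, gravity, and bandpass).

```python
def limb_darkened_intensity(mu, u=0.6):
    """Linear limb-darkening law: I(mu)/I(center) = 1 - u*(1 - mu),
    where mu = cos(angle between the line of sight and the surface normal).
    mu = 1 at disk center, mu = 0 at the limb."""
    return 1.0 - u * (1.0 - mu)

# The limb is noticeably fainter than the center, which is what rounds
# the shoulders of a total eclipse instead of leaving them sharp.
for mu in (1.0, 0.5, 0.0):
    print(f"mu = {mu:.1f}: relative intensity = {limb_darkened_intensity(mu):.2f}")
```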
Chapter 13
Publishing Your Data and Results

Some of the most common questions I'm asked deal with how observers can be sure their data are put to use. For those doing asteroid astrometry, simply turning in an accurate position has immediate merit, as that position can be used to refine the orbit of an asteroid. Lightcurve work does not always offer such instant gratification. However, rest assured that the data can and will be used, as long as you collect it and make it available.

There are several levels for making your data available. The first is to publish well-documented results, where you explain your methods of gathering and analyzing the raw data. The second is to also include the raw data itself. This allows researchers direct access to the information that generated your results; by combining your original data with other data sets, others may be able to learn something new. A third option is to post your raw data, possibly with a period analysis, on your own or a common web site, where other researchers can get it and use it in their studies.

There is the legitimate concern that making the data available puts it into the public domain, so that anyone can claim it. That can be overcome in part by waiting to make the data available until after you publish the results. You can also make the data available only to those who make a specific request and agree to give proper credit. How you handle that situation is up to you, but you should make your data available at some point.
13.1 Confirm Before You Publish
Good science requires that you have solid evidence for your findings. That includes checking to see if there is previous work on your target. If so, do the results match? If not, you need to establish why your results are correct and the previous results are not. It can be as simple as saying that you tried to fit your data to a previously reported rotation rate for an asteroid and the data would not fit. If you can get the original data from the previous work and make it fit your results, that's even better.

If your findings challenge current theory or generally accepted norms, then remember the words of Carl Sagan: "Extraordinary claims require extraordinary evidence." Check and double-check your data and analysis. If you're still satisfied, then report your findings and explain, if possible, why you believe them correct despite being contrary to what's expected.
You also don't want to claim to be the first to find something when you weren't. Below is a partial listing of web sites that you can check for previous journal articles and results published on-line. Of course, you can always use Google or some other search engine to check even more. The unfortunate thing is that no matter how much you dig through the Internet, you may still not find previously reported work. It's happened to me. Do the best you can.

13.1.1 All Objects
NASA Astrophysics Data System (ADS): http://adswww.harvard.edu/ For a complete search, use the “fill in the blanks” form and the full text search.
13.1.2 Asteroids

NASA Planetary Data System, Small Bodies Node: http://pdssbn.astro.umd.edu/

CALL web site: http://www.MinorPlanetObserver.com/astlc/default.htm
13.1.3 Variable Stars

In addition to the AAVSO and IBVS sites listed in Section 13.3, check:

All Sky Automated Survey (ASAS): http://archive.princeton.edu/~asas/ An automated sky survey where you can search and plot data for a given star.

General Catalog of Variable Stars (GCVS): http://www.sai.msu.su/groups/cluster/gcvs/gcvs/ The GCVS is produced by a commission of the International Astronomical Union and is the primary catalog of variable stars.

SkyDOT, or Northern Sky Variability Survey (NSVS): http://skydot.lanl.gov/

SIMBAD: http://simbad.u-strasbg.fr/Simbad Allows searching through many catalogs for a star by name or position and plotting the position using one of several on-line image catalogs. Has links to the VizieR Service, which you should use if SIMBAD doesn't find anything.

The Amateur Sky Survey (TASS): http://www.tass-survey.org/ This catalog is also included in the SIMBAD and VizieR searches.

VizieR Service: http://vizier.u-strasbg.fr/viz-bin/VizieR Same general features as SIMBAD but often more complete.
13.2 Asteroids

13.2.1 The Minor Planet Bulletin
The foremost journal in which amateurs can publish asteroid lightcurves is the Minor Planet Bulletin, a quarterly publication of the Minor Planets Section of the Association of Lunar and Planetary Observers (ALPO). It regularly features articles by amateurs and professionals presenting the results of their lightcurve studies. In recent years, the editors have faced the "golden problem" of having too many articles for a given issue. It wasn't too long ago that they had trouble filling eight pages; now a normal issue contains three to four times that many.

Most important to amateurs is that the MPB is considered a refereed publication. It is indexed in the NASA Astrophysics Data System (ADS) and is found in the libraries of major observatories around the world. This means your results are widely available to the entire professional community and, indeed, the entire world via the Internet. Subscription information can be obtained by visiting

http://www.lpl.arizona.edu/~rhill/alpo/minplan/minplanbull.html

You can also download the electronic versions (PDF) of the MPB, starting with the last issue of 2004, from the CALL site (see below). The downloads are free, though a voluntary contribution is requested. The URL above has more information. The CALL site also has an author's guide and Word template for submitting papers to the MPB.

13.2.2 Collaborative Asteroid Lightcurve Link (CALL)
The CALL site is located at

http://www.MinorPlanetObserver.com/astlc/default.htm

I covered this site earlier in regards to using it as a resource for determining which targets to shoot and forming collaborations with other observers (see page 115). The site also serves as a repository for both results and raw data. With the CALL site, you can post your results so that others can see that the asteroid has been worked or use the results in a study of rotation rates. The former helps avoid duplication, though duplication is not always bad. Sometimes it leads to one observer's being able to help another refine or correct the results before they get published. I've been involved in several such episodes.

Keep in mind that putting your results on the CALL site is not "official," whereas publishing your results in the Minor Planet Bulletin is, since it is a refereed journal. If you want formal credit for your work, you should get it published in the MPB or, if circumstances warrant and allow, in one of the more technical journals such as Icarus.
13.3 Variable Stars
The number of places where you can publish your variable star data and/or analysis is larger than for asteroids. The tradeoff is that, to do so, you will encounter a slightly less flexible set of standards. Don't let that scare you away from this type of work. Most new processes are hard to learn to one degree or another. Once you master the required steps, you sometimes wonder why it seemed so hard in the beginning. You should also be willing to share your experience with newcomers. I do not subscribe to the belief, "I had a hard time when I was learning; you should, too."

13.3.1 IBVS
The Information Bulletin on Variable Stars is a joint effort of two commissions of the International Astronomical Union: Commission 27 (Variable Stars) and Commission 42 (Close Binary Stars). Konkoly Observatory, Budapest, Hungary, publishes the IBVS. The URL is:

http://www.konkoly.hu/IBVS/IBVS.html

This is a technical, refereed journal, so you will need to be precise with your work and adhere to some strict standards, chief among them the use of LaTeX, a special markup language. It is not the easiest thing to learn, but it is worth the effort, as most professional journals use this language or something close to it; only a few accept common Word® or text files. Unless you are familiar with LaTeX, your initial efforts, if done without help from a professional or an amateur who has previously published, may be rejected. Many of the past issues are available on-line and you can use those to help you prepare a document but, for your first few tries at publishing, be sure to get some help from someone who has been through the gauntlet. You can easily find them by visiting the many variable star newsgroups or the web sites of the major variable star groups around the world.

13.3.2 JAAVSO
The Journal of the American Association of Variable Star Observers is distributed by the AAVSO twice a year. In addition to carrying papers submitted by observers, it also contains reports on AAVSO activities and observation totals. You can find details on how to publish a paper in the JAAVSO by visiting the AAVSO web site

http://www.aavso.org/aavso/meetings/howpaper.shtml

Some general guidelines: the paper must be original work, written in English, and about 2,500 to 3,000 words long. More specifics are available on the web site. If this link is not working, go to the home page at http://www.aavso.org/ and do a little bit of searching.

The next options are more for putting your data into on-line services so that others can use the data in their research. You may not get any credit for the data you submit this way, but the important point is that your data is available.

13.3.3 AAVSO
The AAVSO also allows you to post your observations directly to their databases by several means. The preferred method is via their web site, where you can enter and submit the observations directly. The URL is

http://www.aavso.org/observing/submit/webobs.shtml

You do have to register and get an observer code to submit data, but you do not have to be a member of the AAVSO to do so. There is also a handy program available from the AAVSO, PCObs, that allows you to maintain the data off-line and then submit it via the above URL. I prefer this option since it also keeps a record of your observations on your system for future reference. The URL to download the program is

http://www.aavso.org/data/software/pcobsinfo.shtml
13.4 Learn by Association
You have completed one stage of the journey you started at the beginning of this book. There are, I hope, many more to follow. I urge you to read some of the books in the Bibliography, if you haven't already. You'll find even more insights into photometry and may want to expand your efforts to include additional types of targets or do more advanced work.

Don't forget the astronomy magazines. Sky & Telescope and Astronomy allow you to keep up with the latest information without having to wade through all the technical journals yourself. There are numerous journals as well. I've mentioned the Minor Planet Bulletin and JAAVSO. Many clubs and organizations publish regular journals and newsletters. Check around and see if there isn't one that interests you.
As I mentioned several times throughout, there's no reason for you to make this journey alone. You'll find lightcurve work much more enjoyable and satisfying if you share your experiences with others, so that you can pass on what you've learned and, just as important, learn what others have done. Knowledge is power, but it's even more powerful when enhanced by experience. Sometimes you can't get it all by yourself.

I recommend that you become a part of an organization that appeals to your interests. If variable stars are your game, then the AAVSO and/or its counterpart in your country is the place to start. Don't just join, but take part by visiting the newsgroups supported by the organization or developed by others that act in parallel to the official groups. In particular, try to attend one of the annual meetings. It's a great way to put a face to the email names you encounter, to discuss old and new ideas, and to build on what you've learned. You'll find the URLs for many variable star groups in the Bibliography.

For asteroid workers, there is the Minor Planets Section of the Association of Lunar and Planetary Observers (ALPO). The section is responsible for publishing the Minor Planet Bulletin, which is the place to publish your asteroid lightcurve results. ALPO holds annual meetings that you should consider attending if one is in your general area. The Astronomical League, the world's largest federation of amateur astronomers, also holds annual meetings and has an Asteroid Observing club. This is mostly for visual observations, but that doesn't mean those involved don't know their asteroids. The AL annual meetings are big affairs where you can meet a number of fellow amateurs.

While general meetings are often informative, meetings that concentrate on specific parts of observing or research and include workshops are often the most rewarding in terms of picking up new ideas. The one I recommend most is the annual meeting of the Society for Astronomical Sciences. This group has held its workshop in the hills above Los Angeles for many years. In recent years the meeting has been devoted to the topic of using one's camera and scope for scientific research. The topics have ranged from improving photometry technique to spectroscopy, lightcurve work, and searching for variable stars and extrasolar planets, among others. You can find information about SAS at

http://www.socastrosci.org

You can keep in touch with a large number of those involved in asteroid work by joining the Minor Planet Mailing List (MPML), which counts some of the leading asteroid researchers in the world among its regular visitors. Anyone with an interest in asteroids is welcome, as are all related questions. The URL is

http://groups.yahoo.com/group/mpml
It seems the Internet offers no end to the number of newsgroups. You'll find those related to CCD imaging, asteroids, telescopes, and photometry, as well as many other related topics. Search them out and use them. I can't count the times I've been stuck with a problem or question of some sort and had it solved or answered in short order by posting a message to a newsgroup. It is amazing the wealth of knowledge that has been accumulated by "amateurs" in recent times. It really does make one realize that the line between amateur and professional is drawn with a thinner pencil every day, and that the label of "amateur" has less now than ever to do with the level of knowledge and experience.

I was fortunate to have two mentors over the years. Consider being one yourself. It's one thing to share all that you've learned with fellow amateurs, but it's another thing entirely to help bring someone along who wasn't necessarily interested to start. I've heard stories of everyday people who became top-notch asteroid observers simply because someone sat with them one night and showed them how to go about taking an image and measuring the position of an asteroid. Some of them took on second lives, practically living at the observatory during every free minute. I remember the story of one person's project to get high school students involved in asteroid astrometry. In that program, one student went from nearly dropping out to being among the best in his class because someone took the time to show him that all those things he was learning could be applied to things he did at the observatory.

Whatever and however much you do with lightcurve work, keep in mind that it should be fun to at least some degree. If you find it's getting a little frustrating, try taking a visual or quick imaging tour of the Universe for a diversion. The asteroids and variable stars will still be there.
Good luck and clear skies!
Bibliography

What's below is only a partial list. I've read many of the books that are listed, but certainly not all. It's unfortunate that some of the better books are out of print and hard to find. Even if you can find them, the cost is sometimes prohibitive. Astronomy books do not sell like the latest Harry Potter novel; with their limited runs, the price must be higher to cover the production costs. Most of the books are available on Amazon.com or BarnesNoble.com. You can also check the web sites of some of the listed publishers.

Willmann-Bell: http://willbell.com
Springer-Verlag: http://www.springer.de/
Cambridge University Press: http://uk.cambridge.org/
Dover Publications: http://store.doverpublications.com/
Asteroids

Asteroids: A History. Curtis Peebles. pp. 280. Smithsonian Institution Press. ISBN: 1560983892.

Asteroids. Gehrels, ed. University of Arizona Press. ISBN: 0816506957. Out of print. Also check http://www.uapress.arizona.edu/home.htm.

Asteroids II. Binzel, ed. University of Arizona Press. ISBN: 0816511233. Also check http://www.uapress.arizona.edu/home.htm.

Asteroids III. Bottke et al., ed. pp. 1025. University of Arizona Press. ISBN: 0816522812. Also check http://www.uapress.arizona.edu/home.htm.

Dictionary of Minor Planet Names, 5th ed. Lutz D. Schmadel. Springer-Verlag. ISBN: 3540002383.

Introduction to Asteroids: The Next Frontier. Clifford Cunningham. Willmann-Bell. ISBN: 0943396166. This is one of the best intermediate-level books on asteroids. It is difficult to find a copy; if you see it, get it. Also check http://www.allbookstores.com/

T. Rex and the Crater of Doom. Walter Alvarez. pp. 208. Vintage Books. ISBN: 0375702105. From the title and cover, you'd think this is yet another "doomsday asteroid" book. In reality, it's a well-written account of how the theory came to be that an asteroid destroyed the dinosaurs some 65 million years ago.
Variable Stars

Cataclysmic Variable Stars. Brian Warner. pp. 592. Cambridge University Press. ISBN: 052154209X.

Cataclysmic Variable Stars: How and Why They Vary. Coel Hellier. pp. 210. Springer-Verlag. ISBN: 1852332115.

Variable Stars. J.S. Glasby. pp. 333. Harvard University Press. ISBN: 0674932005. Out of print but used copies can be found.

Variable Stars. Michel Petit. John Wiley & Sons. ASIN: 0471909203.

Eclipsing Binary Stars: Modeling and Analysis. Josef Kallrath and Eugene F. Milone. pp. 355. Springer-Verlag. ISBN: 0387986227.

Binary Stars: A Pictorial Atlas. Dirk Terrell et al. pp. 383. Krieger Publishing Co. ISBN: 0894640410.

An Introduction to Close Binary Stars. R.W. Hilditch. pp. 392. Cambridge University Press. ISBN: 0521798000.

Stellar Evolution. A.J. Meadows. Pergamon Press. 2nd ed. ASIN: 0080216692. A nice, easy read on the essentials of the topic without being too basic. No math. Out of print but used copies can usually be found.
CCD Imaging

CCD Astronomy: Construction and Use of an Astronomical CCD Camera. Christian Buil. Willmann-Bell. ISBN: 0943396298. It may be a bit dated, but Buil is one of the CCD gurus.

The New CCD Astronomy: How to Capture the Stars with a CCD Camera in Your Own Backyard. Ron Wodaski. pp. 476. New Astronomy Press. ISBN: 0971123705.

A Practical Guide to CCD Astronomy. Patrick Martinez. pp. 263. Cambridge University Press. ISBN: 0521599504.

Handbook of CCD Astronomy. Steve Howell. pp. 176. Cambridge University Press. ISBN: 0521648343.
Image Processing

The Handbook of Astronomical Image Processing. Berry, R. and Burnell, J. pp. 650. Willmann-Bell. ISBN: 0943396670.

Practical Algorithms for Image Analysis: Descriptions, Examples, and Code. Seul et al. pp. 295. Cambridge University Press. ISBN: 0521660653.
Photometry

Astronomical Photometry: Text and Handbook for the Advanced Amateur and Professional Astronomer. Henden, A. and Kaitchuck, R. pp. 394. Willmann-Bell. ISBN: 0943396255. As far as I'm concerned, the Bible for amateur photometrists.

An Introduction to Astronomical Photometry Using CCDs. W. Romanishin. University of Oklahoma. Available on-line: http://observatory.ou.edu/book4512.html

Handbook of CCD Astronomy. Steve B. Howell. pp. 164. Cambridge University Press. ISBN: 0521648343.

Solar System Photometry Handbook. Russell M. Genet, ed. 1983. Willmann-Bell. ISBN: 0943396034.

An Introduction to Astronomical Photometry. Edwin Budding. pp. 272. Cambridge University Press. ISBN: 0521418674.

High Speed Astronomical Photometry. Brian Warner. Cambridge University Press. ASIN: 0521351502. Out of print but used copies usually available.

The Measurement of Starlight: Two Centuries of Astronomical Photometry. J.B. Hearnshaw. pp. 511. Cambridge University Press. ISBN: 0521403936. You have to be a serious history buff to afford the price of more than $100, but it's a good read.
Telescope Control Software

MPO Connections. Bdw Publishing. http://www.MinorPlanetObserver.com
TheSky. Software Bisque. http://www.bisque.com
Astronomer's Control Panel (ACP). DC3 Dreams. http://acp3.dc3.com/
Starry Night Pro. Space Software. http://www.starrynight.com/
Camera Control Software

MPO Connections. Bdw Publishing. http://www.MinorPlanetObserver.com
CCDSoft. Software Bisque. http://www.bisque.com
MaxIm DL. Diffraction Limited. http://www.cyanogen.com/
AstroArt. MSB Software. http://www.msb-astroart.com/
Photometry Software

MPO Canopus. Bdw Publishing. http://www.MinorPlanetObserver.com
AIP4Win. Willmann-Bell. http://www.willbell.com/
MaxIm DL. Diffraction Limited. http://www.cyanogen.com/
Mira. Axiom Research. http://www.axres.com/
AstroArt. MSB Software. http://www.msb-astroart.com/
IRAF. IRAF Programming Group (UNIX/Linux/Mac). http://iraf.noao.edu/iraf-homepage.html
Miscellaneous

Star Testing Astronomical Telescopes: A Manual for Optical Evaluation and Adjustment. Harold Richard Suiter. pp. 376. Willmann-Bell. ISBN: 0943396441.

Binary Maker. Binary star modeling program. David Bradstreet/Contact Software. Dept. of Physical Sciences, Eastern College, St. Davids, PA 19087. http://www.binarymaker.com

Wilson-Devinney Program. Robert E. Wilson and Edward J. Devinney. FORTRAN code available at ftp://ftp.astro.ufl.edu/pub/Wilson/lcdc2003. A Windows version (WDWint) is available from Robert Nelson, [email protected]
Organization Web Sites

I've listed only those sites that are in English. I apologize to those who are omitted because of my inability to read or speak other languages.

Collaborative Asteroid Lightcurve Link (CALL). http://www.MinorPlanetObserver.com
Society for Astronomical Sciences (formerly IAPPP-West). http://www.socastrosci.org
Association of Lunar and Planetary Observers (ALPO). http://www.lpl.arizona.edu/alpo/
American Association of Variable Star Observers (AAVSO). http://www.aavso.org
British Astronomical Association Variable Star Section (BAAVSS). http://www.ast.cam.ac.uk/~baa/
Center for Backyard Astrophysics (CBA). http://cba.phys.columbia.edu/
Association Française des Observateurs d'Étoiles Variables (AFOEV). http://cdsweb.u-strasbg.fr/afoev/
Group of Eclipsing Binary Observers of the Swiss Astronomical Society (BBSAG). http://www.astroinfo.ch/bbsag/bbsag_e.html
Royal Astronomical Society of New Zealand (RASNZ). http://www.rasnz.org.nz/index.htm
Astronomical Society of South Australia – Variable Star Group. http://www.assa.org.au/sig/variables/
Groupe Européen d'Observation Stellaire (GEOS). http://www.upv.es/geos/
Minor Planet Mailing List (MPML). http://groups.yahoo.com/group/mpml/
Glossary

This glossary is by no means exhaustive. My intent was to include words used in this book that may need a little additional explanation, or words that I frequently encountered when starting out in lightcurve work. Books on specific topics such as photometry and variable stars will have more extensive glossaries.

Absolute Magnitude
For a star, the magnitude it would appear to have if it were 10 parsecs (about 32.6 light-years) from Earth. For an asteroid, the magnitude the asteroid would appear to have if it were at a distance of 1 astronomical unit (AU) from both the Earth and the Sun, and at 0° phase angle, which would include the brightening due to the opposition effect.

ADU
Analog-to-digital unit. In a CCD camera, this is the unit of value assigned to each pixel's sum of electrons. A 16-bit ADU system has a range of values of 0–65,535; an 8-bit camera has a range of 0–255. A wider range of values allows a more precise determination of the actual number of electrons stored in a given pixel. The ADU value of a given pixel can be converted back to the actual number of electrons (or approximately so) by multiplying the ADU value of the pixel by the gain of the camera.

Air mass
The length of the path light takes through the Earth's atmosphere, relative to the path at the zenith. The value is 1.0 when an object is directly overhead. It approximately follows the formula sec(z), where z is the angular distance of the object from the zenith, or the zenith distance.

Albedo
The amount of light an object reflects. Values range from 0% to 100% and are usually listed in the range of 0 to 1.0.

Algol-type Binary
A semi-detached binary system where the secondary star is a lower-mass subgiant that fills its Roche lobe and the primary is a more massive main-sequence star.

Alias
In lightcurve analysis, a period that appears to be the true period but is not. An alias period is often found when the data set cannot uniquely determine how many cycles of the lightcurve have occurred over the total time span of the data. In this case, the alias and true periods usually have a common integral or half-integral multiple that coincides with the time between observing sessions.
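Two of the entries above, ADU and Air mass, reduce to one-line formulas and are easy to sanity-check in code. The gain value below is hypothetical; check your camera's documentation for the real figure.

```python
import math

def electrons_from_adu(adu, gain):
    """Approximate electron count = ADU value * camera gain (e-/ADU)."""
    return adu * gain

def air_mass(zenith_distance_deg):
    """Plane-parallel approximation: X = sec(z).  It breaks down near the
    horizon (z greater than ~75 deg), where refraction and the curvature
    of the Earth's atmosphere matter."""
    return 1.0 / math.cos(math.radians(zenith_distance_deg))

print(electrons_from_adu(30000, 2.3))  # hypothetical gain of 2.3 e-/ADU
print(air_mass(0.0))                   # overhead: air mass 1.0
print(air_mass(60.0))                  # 60 deg from the zenith: about 2
```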
All-Sky (Absolute) Photometry
The process whereby the values required to convert instrumental magnitudes to a standard system are obtained by imaging stars at several locations about the sky. This method requires that sky conditions be very stable and clear.

Altitude
The angular distance of an object above the horizon, with 0° on the horizon and 90° directly overhead. If the altitude is negative, the object is below the horizon.

Amor Asteroids
Asteroids having a perihelion distance of 1.017 < q < 1.3 AU. These orbits do not overlap Earth's.

Apastron
The point in a non-circular orbit of a binary star where the two stars are farthest apart.

Aphelion
The point in an orbit where the object is farthest from the Sun (strictly, from the center of mass of the solar system).

Apollo Asteroids
Asteroids having a semi-major axis > 1.0 AU and perihelion distance q < 1.017 AU. These orbits do overlap Earth's.

Appulse
The close approach of one object to another, as seen against the sky. In reality, the objects may be light-years apart. The term is generally applied when planets and asteroids come close to stars or deep-sky objects. An appulse is different from a conjunction: an appulse occurs when the two objects are closest, while a conjunction occurs when the two objects have the same Right Ascension.

Argument of Perihelion
The angular distance from the ascending node of an orbit to the perihelion point. Values range from 0° to 360°. One of six elements used to define an orbit.

Ascending Node
The angular distance in the plane of the ecliptic from the vernal equinox to the point where the orbit crosses the ecliptic going north (up). Values range from 0° to 360°. One of six elements used to define an orbit.
Aspect Angle
In reference to asteroids, the angle between the line of sight to the observer and the direction of the spin axis of the asteroid. For example, if the observer is looking directly along the spin axis, he's seeing mostly one of the poles of the asteroid. The shape and amplitude of the lightcurve can be dramatically changed by the aspect angle at the time of observations.

Asteroid
Literally, "star-like." Asteroids are small non-cometary bodies that orbit the Sun. Their sizes range from a few meters to nearly 900 km. They are believed to be left over from the early formation of the solar system.

Asteroid Belt
A region lying between the orbits of Mars and Jupiter where the majority of asteroids are found.

Astronomical Unit
The average distance from the Sun to the Earth, or approximately 92,956,000 miles (149,597,870 km).

Aten Asteroids
Asteroids with a semi-major axis < 1.0 AU and aphelion distance Q > 0.983 AU. The orbits overlap Earth's at their aphelion points.

Average Daily Motion
The angular distance, in degrees, that an object travels in its orbit in one day, based on an average speed. Used in place of the semi-major axis as one of the six elements used to uniquely define an orbit:

    µ = 0.9856076883 / a^(3/2)

where a is the semi-major axis in AU.

Azimuth
The angular distance along the horizon, from due north through east, to the point where an arc going through the zenith (overhead point) and the object meets the horizon: 0° = North, 90° = East, 180° = South, 270° = West.
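The average-daily-motion formula takes one line of code. As a sanity check, a main-belt orbit like Ceres' (a of roughly 2.77 AU) should come out near 0.21°/day, corresponding to a period of about 4.6 years.

```python
def average_daily_motion(a_au):
    """Mean motion in degrees per day for a heliocentric orbit with
    semi-major axis a (in AU): mu = 0.9856076883 / a**1.5."""
    return 0.9856076883 / a_au ** 1.5

mu = average_daily_motion(2.77)       # roughly Ceres' semi-major axis
period_years = 360.0 / mu / 365.25    # time for one full 360-degree orbit
print(f"mu = {mu:.4f} deg/day, period = {period_years:.2f} yr")
```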
Bimodal Curve
A lightcurve that shows two maximums and two minimums per cycle.

Binary Star
A system where two stars are gravitationally bound and orbit one another.
Binning
The process where a (usually square) region of pixels on a CCD chip is combined, during the download process or in software, to create a single, larger pixel. For example, 2×2 binning would group a square of four pixels into a single pixel containing the total electron count of the four and with an effective size double that of the physical pixel.

CCD
Charge-Coupled Device. Often used to refer to a slice of material containing an array of thin semiconductors (pixels). The pixels rely on the photoelectric effect to convert photons into electrons and then store the electrons. After an interval of time, the number of electrons in each element is read and stored in a computer. The values are then converted by software to shades of gray or color and displayed on a computer screen. Typical CCD devices have a quantum efficiency (QE) of 50–75%, with some approaching 95%. This makes them much more efficient than the human eye, which has a QE of only 1%.

Centaurs
A group of asteroids circling the Sun between the orbits of Jupiter and Neptune. They are believed to be from the Kuiper Belt and to have been pulled into orbits that are unstable on timescales of about 10^6 years.

Center of Mass
The point in a two-body (or n-body) system that is the mean position of the mass within the system. In a binary star system, the ratio of the distances from this point to each star is inversely proportional to the ratio of the masses of the individual stars. In the solar system, the position is near but not exactly at the center of the Sun.

Class (Asteroid)
The grouping of an asteroid based on its spectrum. There are many classification schemes, one of the more common ones being developed by Dr. David Tholen of the University of Hawaii. The two most common classes are S and C, with the S asteroids tending to be reddish in color, while the C class asteroids have a more neutral or even bluish color. A newer system, developed by Dr. Bobby Bus, is gradually replacing the Tholen system.
Close binary A binary system where, at some point in its evolution, at least one of the stars reaches its Roche lobe and transfers matter to the other star. Cluster Variable Short-period Cepheid stars usually found in globular clusters. RR Lyrae stars. Color index The difference between the magnitudes of a given object in two different color bands. For example, the (B–V) color index is the value obtained by subtracting the
magnitude of the star in the V band from the magnitude in the B band. The color index can be used to estimate the temperature of an object. For stars, this assumes that the interstellar reddening is negligible. Commensurate Orbits Orbits where the period of one is a simple multiple of another. Contact Binary (Also Overcontact Binary) Asteroid: A single asteroid made of two smaller bodies in contact with one another. Possible candidates include 4179 Toutatis and 216 Kleopatra. Binary Star: (see page 189) A system where both stars have filled their Roche lobes. The stars are usually in synchronous rotation and have circular orbits. The most common type is the W UMa class. Date of Osculation The specific moment for which the orbit is defined by the listed elements. Because of the influence of the Sun and planets on an asteroid orbit, the elements for that orbit are constantly changing. Therefore, it’s often necessary to know the date for which the given elements were defined. Declination The angular distance of an object north or south of the celestial equator. Positive is north. Detached binary A binary system where both stars are within their limiting (Roche) lobes. Differential Photometry The process of determining the brightness of an object by taking the difference between its measured value and that of a comparison star (or the average of several stars). Generally, in CCD imaging, all the comparisons and targets are in the same field, thus eliminating, or mostly so, all extinction considerations. The magnitudes are not on a standard system unless the comparison star value has been transformed and any color terms that might affect the differential magnitude have been taken into account. Eccentricity The "roundness" of an orbit. Values range from 0.0 (a perfect circle) to 0.999999 (highly elliptical). A parabola has an eccentricity of exactly 1.0; a hyperbola has an eccentricity >1.0. One of six elements used to define an orbit uniquely.
e = √(a² − b²) / a
a = semi-major axis; b = semi-minor axis
The semi-major axis is half the length of the long axis of an ellipse, while the semi-minor axis is half the length of the short axis.
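The eccentricity relation above can be checked numerically; a minimal Python sketch (function name illustrative):

```python
import math

def eccentricity(a, b):
    """Orbital eccentricity from semi-major axis a and semi-minor axis b,
    per the glossary relation e = sqrt(a**2 - b**2) / a."""
    return math.sqrt(a * a - b * b) / a

# A circle (a == b) has e = 0; flattening the ellipse pushes e toward 1.
e_circle = eccentricity(1.0, 1.0)   # 0.0
e_flat = eccentricity(2.0, 1.0)     # sqrt(3)/2, about 0.866
```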
Ecliptic The plane of the Earth's orbit as projected into the sky. Elongation The Sun–Earth–object angle, i.e., the Sun–object angular separation as seen from the Earth. At opposition, this value is near 180°. When the object is in conjunction with the Sun, the value is near 0°. Eos Asteroids Asteroids with orbits tending towards a semi-major axis of 3.02 AU and inclination of 10°. Ephemeris A list of positions giving an object's Right Ascension and Declination and usually other information such as magnitude, Earth and Sun distance, etc. Epoch The date for which the locations of the celestial equator and vernal equinox, used as the references for an orbit, are defined. The current standard is J2000, which is JD = 2451545.0. Equipotential Surface (or Equipotential) The surface on which the potential energy is the same everywhere. See the books on binary stars for a detailed discussion. Exoatmospheric Outside the Earth’s atmosphere. In photometry, magnitudes are converted to the value they would have above the Earth’s atmosphere before any transformations are made to a standard system. This is done by subtracting the effects of extinction. Extinction The dimming of light due to its passage through the Earth’s atmosphere. This is often measured in magnitudes per unit of air mass. The effects of extinction must be removed before the magnitude of an object can be put on a standard system. Extrinsic Variable A star where the changes in its brightness are due to circumstances other than changes to the star itself. The most common type of extrinsic variable is the eclipsing binary star, where the light changes are caused by one star moving in front of the other as seen from Earth. Full Well Depth The maximum number of electrons that a single pixel on a CCD chip can store.
Full-Width Half-Maximum (FWHM) On a CCD image, the width of a star profile, in pixels, where the profile is at one-half its maximum height. Seeing, a measurement of the steadiness of the atmosphere, also uses FWHM. In this case, it is the width of the profile in arcseconds. Flora Asteroids Asteroids having orbits tending towards a semi-major axis of 2.2 AU and inclination of 5°. Named after the largest member, 8 Flora. The region is sometimes divided into subregions. Gain The conversion factor, given in units of electrons/ADU (e–/ADU), that relates the ADU value of a pixel on a CCD camera to the actual number of electrons stored in the pixel. For example, a common value for gain is 2.3, i.e., 2.3 e–/ADU. If the ADU value is 1000, then 2300 electrons were stored in the pixel. Geocentric Positions as seen from the center of the Earth. Particularly important when an object is close to the Earth. Gravity darkening The darkening, or brightening, of a region on a star due to a localized increase in the gravitational field. The effects are often seen in binary star lightcurves and are more pronounced in stars with radiative envelopes. Heliocentric Positions as seen from the center of the Sun. Hilda Asteroids A family of asteroids whose orbits have a 2:3 commensurability with Jupiter, i.e., their orbital period is about 8 years. Hirayama Families Asteroids with similar elements, primarily semi-major axis, inclination, and eccentricity. Hungaria Asteroids Asteroids with orbits tending towards a semi-major axis of 1.95 AU and inclination of 23°. Inclination Asteroids: The inclination of an orbit to the ecliptic, the plane of the Earth's orbit. Values range from 0° to 180°. One of six elements used to define an orbit uniquely. If i < 90°, the object's motion is prograde, i.e., it moves about the Sun in the same direction as the Earth. If 90° < i < 180°, the motion is retrograde. Nearly all known asteroid orbits are prograde. Binary stars (cf. pp. 189, 191): The angle between the plane of the sky and the orbital plane of the binary system. If the inclination is 0°, the orbit is seen pole-on. If the inclination is 90°, the orbit is seen edge-on. Intrinsic Variable A star where the changes in brightness are caused by changes to the star itself, as in the case of the Cepheids or Long Period Variables (LPVs), which change size and temperature as they go through their cycles. Instrumental Magnitude The brightness of an object measured directly from a CCD image. It does not account for extinction or use any transformations to convert it to a standard system. Julian Date The number of days since January 1, 4713 B.C. Julian Date is used since it is independent of the calendar in use. Kirkwood Gaps Voids in the asteroid belt where the orbital period for that region is an integral fraction of Jupiter's. Koronis Asteroids Asteroids with orbits tending towards a semi-major axis of 2.88 AU and inclination of 2°. Kuiper Belt Objects Also known as KBOs. A group of asteroids, and possibly comets, that circle the Sun at the outer reaches of the solar system, i.e., from Jupiter to well beyond Pluto. The primary Kuiper Belt lies beyond Neptune to about 50 AU. Pluto is generally believed to be the first member of this class to be discovered. Latitude The angular distance of a position north or south of the Earth's equator. Lightcurve A plot of the magnitude of an object versus time. The period of a lightcurve is the time between successive corresponding points in the curve. The amplitude is the peak-to-peak difference in magnitude.
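The ADU-to-electron conversion in the Gain entry is a one-liner; sketched here with the glossary's own numbers (function name illustrative):

```python
def adu_to_electrons(adu, gain):
    """Convert a pixel's ADU value to stored electrons.
    gain is in e-/ADU, as in the glossary example (2.3 e-/ADU)."""
    return adu * gain

# 1000 ADU at a gain of 2.3 e-/ADU corresponds to ~2300 electrons,
# matching the glossary example.
electrons = adu_to_electrons(1000, 2.3)
```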
Lightcurve Photometry Photometry performed for the specific purpose of obtaining a lightcurve of a variable object and then analyzing the lightcurve for its period, amplitude, and any other information. Limb Darkening An effect where the edge (limb) of a star looks darker because the line of sight passes through cooler layers than when looking near the center of the star’s disk. The effect is more pronounced in blue light and for cooler stars. It is seen in lightcurves, especially for annular eclipses, where the bottom becomes rounded instead of flat. Linear Regression A mathematical process that finds a line that fits the data points in a set such that the sum of the squares of the distances between each point and the solution curve is a minimum. A perfect fit has a correlation of 1 (or –1). A totally random set of data has a correlation of 0. Longitude The angular distance of a position east or west of the prime meridian. Luminosity The amount of energy put out by a star in a given unit of time. Magnitude A measurement of the brightness of an object. In astronomy, the scale is logarithmic, with a one-magnitude difference representing a brightness ratio of 2.5118 (more exactly, 10^0.4). In the astronomical scale, brighter stars have smaller magnitudes, with the brightest stars having negative magnitudes, e.g., Sirius has a magnitude of about –1.5. Main sequence Based on the Hertzsprung–Russell (H–R) diagram, which plots the temperature of stars versus absolute magnitude. There is a pronounced “band” of stars going from lower right to upper left of this diagram. Stars within this band are members of the main sequence. They are generally characterized by cores that generate energy via hydrogen fusion. Mass Transfer The exchange of matter between two stars. Mean Anomaly The angular distance of an asteroid in its orbit from the point of perihelion to its position on the date of osculation using an average angular velocity. Values range from 0° to 359.99999°. One of six elements used to define an orbit uniquely.
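The magnitude scale described above translates directly into code; a small sketch of the Pogson relation (helper name illustrative):

```python
def flux_ratio(m1, m2):
    """Brightness ratio implied by two magnitudes, per the Pogson
    relation ratio = 10**(-0.4 * (m1 - m2)).
    Brighter objects have numerically smaller magnitudes."""
    return 10 ** (-0.4 * (m1 - m2))

# One magnitude of difference is a factor of ~2.512 in brightness,
# matching the 2.5118 quoted in the glossary.
ratio = flux_ratio(0.0, 1.0)
```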
M = µ (JD − T)
JD = Julian Date; T = JD of perihelion; µ = average daily motion (degrees/day)
Measuring Aperture The area on a CCD image within which the pixel values are analyzed to find the signal coming from an object. The final result is the sum of all the pixel values less the sky background. Usually the pixel values are in ADU (analog-to-digital units). They are usually converted to an equivalent number of electrons by multiplying the ADU value by the gain of the system. Gain is expressed in electrons/ADU (e–/ADU). Monomodal Curve A lightcurve that shows only one maximum (or minimum) per cycle. Noise (Period) Spectrum A result of Fourier analysis, the spectrum is a plot of the RMS fit of the data versus the periods that generated the fit values. A minimum in the plot, i.e., when the RMS fit is closest to the actual data, indicates a possible period solution. Nova (Novae) A star that has a sudden outburst of light, causing it to appear thousands if not millions of times brighter than before the event. Two basic types of novae are — Classic
A “one-shot” explosion caused by a star undergoing sudden fusion of its hydrogen-rich outer layers, leaving behind a small dense core.
Dwarf
A binary star where the brightening is somewhat regular and caused by the exchange of matter between a cooler, large secondary and its hot, small companion. The matter from the cool star is formed into an “accretion disc” around the hot star. On occasion, the matter in the disc “ignites,” causing the system to brighten by several magnitudes for a short time. The cycle repeats on the order of tens of days.
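The Mean Anomaly relation in the glossary (average daily motion times the days elapsed since perihelion, wrapped into 0–360°) can be sketched as follows; the function name is illustrative:

```python
def mean_anomaly(jd, t_perihelion, mu):
    """Mean anomaly in degrees: days elapsed since perihelion times the
    average daily motion mu (deg/day), wrapped into [0, 360)."""
    return (mu * (jd - t_perihelion)) % 360.0

# A hypothetical asteroid with mu = 0.25 deg/day, 100 days past perihelion:
m = mean_anomaly(2453100.0, 2453000.0, 0.25)   # 25.0 degrees
```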
Opposition The time when an object's RA is 180° greater (or less) than the Sun's. It does not necessarily mean the object is exactly opposite the Sun. For that to be true, the object must be the same distance above or below the celestial equator as the Sun is below or above. In that case, the elongation of the object is 180°. Opposition Effect The excessive brightening of an object as it nears opposition. Offset (Lightcurve analysis) The difference between the reference magnitude for one session of data versus another. For example, if one is doing differential photometry on an asteroid and
the average magnitude of the comparisons is 14.000 and then 13.000 on another session, the offset for the second session is 1.000 magnitudes and would be added to the differential magnitudes of the second session so that they could be compared directly to those of the first session. Optical Thickness The effective, as opposed to physical, thickness of a filter. Different materials have different refractive indices, and so two filters, while having the same physical thickness, can change the focal point by different amounts when inserted into the light path. Parallax The apparent shift in the position of an object in reference to a distant point caused by viewing the object from different positions when it is relatively close. Pointing your thumb upward at arm’s length provides a simple demonstration of parallax. Close one eye and note the position of your thumb against a distant building or other reference point. Then look using only the other eye. The position of your thumb appears to shift in reference to that distant point. Periastron The point in a binary star orbit where the two stars are closest. Perihelion The point in an orbit about the Sun that is closest to the Sun. Phase The position in a lightcurve in units of the period of the curve. This is used as an alternative to absolute time and allows data from spans greater than the period to be “folded” into a single curve. Phase Angle The Sun–asteroid–Earth angle, i.e., the angular separation between the Sun and the Earth as seen from the asteroid. At opposition for the asteroid, this value is near 0°. Phase Coefficient A value used to compute the brightness of an asteroid that takes into account the sudden brightening of an asteroid near opposition. Phased Plot A plot where the data along the X-axis are placed in the range of 0% to 100% (or 0 to 1.0), that being the percentage of the period of the lightcurve. Phocaea Asteroids Asteroids with orbits tending towards a semi-major axis of 2.36 AU and inclination of 24°.
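The Phase and Phased Plot entries describe a simple folding operation; a minimal sketch (function name illustrative):

```python
def phase(times, epoch, period):
    """Fold observation times (days) into rotational phase 0..1,
    as described under 'Phase' and 'Phased Plot'."""
    return [((t - epoch) / period) % 1.0 for t in times]

# Observations spanning several cycles fold onto a single 0-1 axis:
phases = phase([0.0, 3.1, 7.45], epoch=0.0, period=3.0)
```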
Plane of the Sky A plane at right angles to the line of sight. Population I Stars Stars that favor the spiral arms of galaxies. They are believed to be younger than Population II stars. In general, these stars have higher ratios of metals (elements other than hydrogen and helium) because they are formed from already processed material. Population II Stars Older stars found in the core and halo of a galaxy. Position Angle The angle of a line joining two objects, measured counter-clockwise from north through east: North = 0°, East = 90°, South = 180°, West = 270°.
Preliminary Designation After an asteroid is discovered, it is given a provisional designation until its orbit is determined with sufficient accuracy. After that, it is assigned a permanent number and may be named by its discoverer or the International Astronomical Union. The designation consists of the year of discovery followed by a two-letter code. The first letter tells in which half-month of the year the discovery was made, e.g., A is the first half of January, B the second half, and so on. The letters “I” and “Z” are not used for the first letter; “I” is not used for the second. The second letter is the order of discovery within the half-month, with A being the first, etc. If many discoveries are made, subscript numbers are used. For example, HZ is followed by HA1, HB1, etc. Primary Eclipse The deeper of the two eclipses in a binary star lightcurve, i.e., the one that causes the greatest fading of the entire system. The star being eclipsed is usually the hotter star but not necessarily the one with the higher luminosity, since luminosity is based on temperature and size. Primary Star The main star in a binary system. The definition varies according to the field of study. For lightcurve work, the primary is the star in a binary system that, when covered, causes the deepest eclipse. In an Algol-type system, this is usually the smaller star. Generally, it is the hotter star.
Plutinos Members of the Kuiper Belt (KBOs) that circle the Sun at a distance of approximately 40 AU. Named after the first member to be discovered: Pluto. Radial Velocity The velocity of an object along the line of sight. In binary systems this value varies as the stars orbit one another. Radial velocity data are required to develop a complete model of a binary system. Radius Vector The distance from the Sun to the Earth in astronomical units. Rectangular Coordinates Coordinates using a three-dimensional XYZ coordinate system. The units are Astronomical Units. The positive X-axis points towards the vernal equinox, the positive Y-axis points towards the summer solstice, and the positive Z-axis points towards the North Ecliptic Pole. Regolith The fragmented, dusty or rocky surface of an asteroid. The depth can vary from a few millimeters to several kilometers. Right Ascension The angular distance of an object measured west to east along the celestial equator between the vernal equinox and its position. Values are in units of time ranging from 00:00 to 23:59:59.9999999... Roche lobe The maximum volume of space that a star in a binary can attain before mass transfer to the other star occurs. If a star fills or overfills this lobe, there is usually a transfer of matter to the other star, assuming it does not fill its lobe as well. Technically, this term applies only for stars in a circular orbit with synchronous rotation. Rubble Pile For asteroids, a conglomeration of material that is gravitationally bound to form a single body. The size and rotation speed of an asteroid determine whether or not it can have such a structure, i.e., small, fast rotators must be solid (monolithic) or they would fly apart. Semi-Detached binary A binary system where one star fills its limiting lobe while the other star is well within its own lobe. The most common example is the Algol class.
Semi-Major Axis The average distance of an object from the Sun. Also equal to half the length of the long axis of an ellipse. Used in place of daily motion as an element to identify an orbit uniquely. Sky Annulus The area on a CCD image within which the pixel values are analyzed to determine the average sky background. This value is then subtracted from each pixel value within the measuring aperture to obtain the actual signal value for the object. Slope Parameter The value used to define the extra brightening of an object when it is near opposition. This is the G value given in lists of asteroid orbital elements. Spectral type The classification of a star based on its temperature. The sequence, going from hottest to coolest, is OBAFGKM. Each type can be divided into ten subtypes, going from 0 to 9. A star that is A0 is hotter than one that is classified A5, which in turn is hotter than an A9 star. Spin Axis The axis of rotation for an asteroid. To determine the spin axis means to determine the position in the sky to which the north polar axis points. Spousal Permission Units (SPU) Credits issued by one spouse to another so that the recipient may do something in the future, e.g., purchase a telescope, without incurring the wrath of the spouse issuing the credits. There is no actuarial table that defines the number of SPUs required to cover the cost of any given act. Their value is often volatile and subject to seasonal if not daily fluctuations. Note that SPUs do not accrue interest and, indeed, may lose value over time. Therefore, it is usually wise to redeem SPUs as soon as possible after they are issued. Standard Stars Stars used to calibrate a photometric system. Superoutburst A larger than normal outburst in a cataclysmic variable, but not on the scale of a nova. Taxonomic Class The general classification of an object. For asteroids, this usually refers to one of several classification schemes, the most commonly used being that developed by D. Tholen.
Another, more recent, scheme has been introduced by S.J. Bus.
Themis Asteroids Asteroids with orbits tending toward a semi-major axis of 3.13 AU and inclination of 1.5°. Topocentric Positions as seen from a point on the Earth's surface. Important for objects close to Earth. Transforms (Transformation Values) The values required to convert a raw instrumental magnitude to magnitudes based on a standard system, usually the Johnson–Cousins UBVRI. Trojan Asteroids Asteroids traveling in approximately the same orbit as Jupiter but preceding and following it by about 60°, i.e., in the Lagrangian points where the gravitational effects of Jupiter and the Sun are nearly in equilibrium. Vestoids Asteroids believed to have been created following a collision between the large asteroid, 4 Vesta, and another body. They show similar spectral signatures as well as albedos. Some have been found to be binary asteroids. White Dwarf A small, hot, and extremely dense star nearing the end of its evolutionary cycle. It is often the remains of a main sequence star that reached the giant stage and then expelled its outer layers, leaving behind only the hot, dense core. W UMa binary A close contact (or overcontact) binary star system. The temperature of each star is less than 8000 K, the period of the orbit is less than 0.75 d, the total mass of the system is less than a few solar masses, and the mass ratio is well under 1.0. The stars are both members of the main sequence and have convective atmospheres. X-Coordinate Coordinates using the typical three-dimensional XYZ coordinate system. The units are Astronomical Units. The positive X-axis points toward the vernal equinox, the positive Y-axis points toward the summer solstice, and the positive Z-axis points toward the North Ecliptic Pole. Y-Coordinate Coordinates using the typical three-dimensional XYZ coordinate system. The units are Astronomical Units. The positive X-axis points toward the vernal equinox, the positive Y-axis points toward the summer solstice, and the positive Z-axis points toward the North Ecliptic Pole.
YORP Effect The gradual increase or decrease in the rotation rate of an asteroid caused by thermal emissions. The asteroid is heated on its morning side by direct sunlight. The heat build-up is released on the afternoon side, giving a slight push to the asteroid’s rotation. Depending on whether the asteroid rotates in prograde or retrograde motion, the asteroid slowly speeds up or slows down. The effect is most pronounced on smaller and irregular bodies; spherical bodies are affected only very slightly, if at all. YORP comes from Yarkovsky, O'Keefe, Radzievskii, and Paddack, the researchers whose work first explored the possibility of sunlight altering not only spin rates but spin axis orientations. Z-Coordinate Coordinates using the typical three-dimensional XYZ coordinate system. The units are Astronomical Units. The positive X-axis points toward the vernal equinox, the positive Y-axis points toward the summer solstice, and the positive Z-axis points toward the North Ecliptic Pole. Zenith Distance The angular distance of an object from the zenith, i.e., the point directly overhead. The value is 0° when the object is at the zenith and 90° when the object is exactly on the horizon. Zero-Point Photometry (Reductions): The constant value that is part of the transformation equation that converts a raw instrumental magnitude to a magnitude on a standard system. Once the instrumental magnitude has been corrected for extinction and color dependency, this value is applied to put the final magnitude on the standard system. The zero-point value can change from night to night, being affected by changes in the equipment, observing conditions, etc. Lightcurve analysis: The value on an arbitrary or standardized magnitude system against which all differential magnitudes are referenced. For example, the average value of the comparison stars for the first set of observations of a target might be 14.000. This is taken as the reference or zero-point value.
If, on another session, the average is 13.000, then all differential magnitudes for that second session must be increased by 1.000 magnitudes so that those differential values can be compared directly to the differential values in the first session. The offset is 1.000 magnitudes. The zero-point is 14.000.
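The offset/zero-point bookkeeping described above reduces to a one-line shift; a sketch using the glossary's own 14.000/13.000 example (function name illustrative):

```python
def align_sessions(diff_mags, comp_avg, zero_point):
    """Shift one session's differential magnitudes onto the reference
    (zero-point) system. Per the glossary's worked example, the offset
    is zero_point - comp_avg and is added to every differential value."""
    offset = zero_point - comp_avg
    return [m + offset for m in diff_mags]

# Session 2 comparisons average 13.000 against a 14.000 zero-point,
# so its differentials are raised by 1.000 magnitudes:
session2 = align_sessions([0.250, 0.310], comp_avg=13.000, zero_point=14.000)
```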
Appendix A: Constellation Names The list below gives the name of each of the 88 constellations recognized by the International Astronomical Union (IAU), the official three-letter designation, and the Latin possessive.

Andromeda          AND   Andromedae
Antlia             ANT   Antliae
Apus               APS   Apodis
Aquarius           AQR   Aquarii
Aquila             AQL   Aquilae
Ara                ARA   Arae
Aries              ARI   Arietis
Auriga             AUR   Aurigae
Bootes             BOO   Bootis
Caelum             CAE   Caeli
Camelopardalis     CAM   Camelopardalis
Cancer             CNC   Cancri
Canes Venatici     CVN   Canum Venaticorum
Canis Major        CMA   Canis Majoris
Canis Minor        CMI   Canis Minoris
Capricornus        CAP   Capricorni
Carina             CAR   Carinae
Cassiopeia         CAS   Cassiopeiae
Centaurus          CEN   Centauri
Cepheus            CEP   Cephei
Cetus              CET   Ceti
Chamaeleon         CHA   Chamaeleontis
Circinus           CIR   Circini
Columba            COL   Columbae
Coma Berenices     COM   Comae Berenices
Corona Australis   CRA   Coronae Australis
Corona Borealis    CRB   Coronae Borealis
Corvus             CRV   Corvi
Crater             CRT   Crateris
Crux               CRU   Crucis
Cygnus             CYG   Cygni
Delphinus          DEL   Delphini
Dorado             DOR   Doradus
Draco              DRA   Draconis
Equuleus           EQU   Equulei
Eridanus           ERI   Eridani
Fornax             FOR   Fornacis
Gemini             GEM   Geminorum
Grus               GRU   Gruis
Hercules           HER   Herculis
Horologium         HOR   Horologii
Hydra              HYA   Hydrae
Hydrus             HYI   Hydri
Indus              IND   Indi
Lacerta            LAC   Lacertae
Leo                LEO   Leonis
Leo Minor          LMI   Leonis Minoris
Lepus              LEP   Leporis
Libra              LIB   Librae
Lupus              LUP   Lupi
Lynx               LYN   Lyncis
Lyra               LYR   Lyrae
Mensa              MEN   Mensae
Microscopium       MIC   Microscopii
Monoceros          MON   Monocerotis
Musca              MUS   Muscae
Norma              NOR   Normae
Octans             OCT   Octantis
Ophiuchus          OPH   Ophiuchi
Orion              ORI   Orionis
Pavo               PAV   Pavonis
Pegasus            PEG   Pegasi
Perseus            PER   Persei
Phoenix            PHE   Phoenicis
Pictor             PIC   Pictoris
Pisces             PSC   Piscium
Piscis Austrinus   PSA   Piscis Austrini
Puppis             PUP   Puppis
Pyxis              PYX   Pyxidis
Reticulum          RET   Reticuli
Sagitta            SGE   Sagittae
Sagittarius        SGR   Sagittarii
Scorpius           SCO   Scorpii
Sculptor           SCL   Sculptoris
Scutum             SCT   Scuti
Serpens            SER   Serpentis
Sextans            SEX   Sextantis
Taurus             TAU   Tauri
Telescopium        TEL   Telescopii
Triangulum         TRI   Trianguli
Triangulum Australe TRA  Trianguli Australis
Tucana             TUC   Tucanae
Ursa Major         UMA   Ursae Majoris
Ursa Minor         UMI   Ursae Minoris
Vela               VEL   Velorum
Virgo              VIR   Virginis
Volans             VOL   Volantis
Vulpecula          VUL   Vulpeculae
Appendix B: Transforms Example The reduction steps that use the concepts introduced in the Photometric Reductions chapter starting with “The Different Path” on page 48 will be covered in this and subsequent appendices. Those using MPO Canopus and MPO PhotoRed, which pre-program the reduction routines, should refer to the documentation for those programs. These appendices are for those using spreadsheets. It’s expected that you have basic skills for setting up formulae in cells that compute a result based on a combination of other cells. This example assumes that the (V–R) color index is used throughout the reduction process. The overall process is identical if using (B–V) and/or (V–I).
Example Transforms Data

V Filter
Name           V       R       v1      v2      X
------------------------------------------------------
LW CAS 0045   12.991  12.510  -8.599  -8.628  1.080
LW CAS 0058   14.218  13.372  -7.356  -7.343  1.080
LW CAS 0145   15.332  14.369  -6.231  -6.225  1.079
LW CAS 0249   13.091  12.626  -8.535  -8.540  1.080
LW CAS 19451  14.398  13.995  -7.193  -7.235  1.080
LW CAS 19468  13.313  12.762  -8.301  -8.301  1.080
LW CAS 19566  13.839  13.462  -7.787  -7.778  1.080
LW CAS 19822  14.309  13.538  -7.266  -7.284  1.080

R Filter
Name           V       R       r1      r2      X
------------------------------------------------------
LW CAS 0045   12.991  12.510  -9.099  -9.104  1.079
LW CAS 0058   14.218  13.372  -8.190  -8.201  1.079
LW CAS 0145   15.332  14.369  -7.202  -7.193  1.079
LW CAS 0249   13.091  12.626  -8.991  -8.994  1.080
LW CAS 19451  14.398  13.995  -7.621  -7.625  1.079
LW CAS 19468  13.313  12.762  -8.843  -8.840  1.080
LW CAS 19566  13.839  13.462  -8.133  -8.149  1.080
LW CAS 19822  14.309  13.538  -8.048  -8.058  1.080

C Filter
Name           V       R       c1      c2      X
------------------------------------------------------
LW CAS 0045   12.991  12.510  -9.667  -9.665  1.079
LW CAS 0058   14.218  13.372  -8.571  -8.558  1.079
LW CAS 0145   15.332  14.369  -7.507  -7.517  1.078
LW CAS 0249   13.091  12.626  -9.567  -9.572  1.079
LW CAS 19451  14.398  13.995  -8.221  -8.225  1.079
LW CAS 19468  13.313  12.762  -9.359  -9.354  1.079
LW CAS 19566  13.839  13.462  -8.787  -8.782  1.079
LW CAS 19822  14.309  13.538  -8.476  -8.464  1.079
The Spreadsheet The screen shot shows the spreadsheet developed for this reduction. Note that there are three pages in the Excel® notebook (a fourth will be added later). Each page holds the data for a separate filter. The V Transforms page is being displayed. The R and C pages are shown below for comparison. The formula setup is identical, the difference being that the values in Columns D through J will have the data for the filter on that page. Columns B and C contain the Henden catalog V and R values. Columns D and E contain the raw instrumental magnitudes from the two images taken for the transforms reduction. Column F contains the averaged air mass for the two exposures.
The cells in Column H contain the average of the two instrumental values. The generic formula for the cells is AVERAGE(Dx, Ex), where x is the row number. The cells in Column I (labeled V–v) contain the difference between the catalog and average instrumental magnitudes. The generic formula is Bx – Hx. The cells in Column J (labeled V–R) contain the difference between the catalog V and R magnitudes. The generic formula is Bx – Cx.
The plot is a type “X-Y Scatter.” The X values are those in column J, while the Y values are those from column I. A linear trend line was added to the plot by right-clicking on a data point and selecting “Add Trendline” from the popup menu. On the options page for the trend line, the options for showing the formula and correlation values were selected. From the trend line formula, the transform for V would be V = Vo – 0.109 (V–R) + 21.655 where Vo is the exoatmospheric instrumental magnitude. In this example, the first-order extinction terms were set to 0.0. See “First-Order Extinctions – Are They Really Necessary?” on page 50. The pages and transforms for R and C (to V) are below.
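The spreadsheet's trendline fit can be replicated outside Excel. The sketch below (Python with NumPy assumed available) rebuilds columns H–J from the V-filter table in this appendix and fits the same least-squares line; the slope should come out near the –0.109 quoted above:

```python
import numpy as np

# Catalog V, R and the two raw instrumental magnitudes (v1, v2)
# for the eight LW CAS standards in the V-filter table.
V  = np.array([12.991, 14.218, 15.332, 13.091, 14.398, 13.313, 13.839, 14.309])
R  = np.array([12.510, 13.372, 14.369, 12.626, 13.995, 12.762, 13.462, 13.538])
v1 = np.array([-8.599, -7.356, -6.231, -8.535, -7.193, -8.301, -7.787, -7.266])
v2 = np.array([-8.628, -7.343, -6.225, -8.540, -7.235, -8.301, -7.778, -7.284])

v_mean = (v1 + v2) / 2      # column H: averaged instrumental magnitude
y = V - v_mean              # column I: catalog V minus instrumental
x = V - R                   # column J: catalog (V-R) color index

# Least-squares line, equivalent to Excel's linear trendline;
# slope is the V color term, intercept the nightly zero-point,
# giving V = Vo + slope*(V-R) + intercept.
slope, intercept = np.polyfit(x, y, 1)
```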
The Hidden Transforms Remember that you need to find a second set of transforms that correlate the instrumental color index to the standard color index (see page 56). These allow you to find the standard color index for the comparisons and target as well as the standard magnitude of the comparison stars. These transforms are not used to convert target magnitudes to the standard system. The setup for this reduction is straightforward since it uses data from the original three worksheets. Use the screen shot above as your guide for the following steps. 1. Add a new worksheet and move it to the end of the notebook list. Rename it “Hidden Transforms.” 2. Copy A1:C9 from any of the other pages to the new page, i.e., Rows 1 through 9, Columns A through C. 3. From the V transforms page, copy Columns D and E (the V instrumental magnitudes) and paste them into Columns D and E on the new page. 4. From the R transforms page, copy Columns D and E (the R instrumental magnitudes) and paste them into Columns F and G on the new page. 5. Set up Column I on the new page. The cells in this column hold the average of the two V instrumental values, e.g., cell I2 would have the formula AVERAGE(D2, E2) 6. Set up Column J on the new page. The cells in this column hold the average of the two R instrumental values, e.g., cell J2 would have the formula AVERAGE(F2, G2)
7. Set up Column K on the new page. The cells in this column hold the differences between columns I and J, e.g., cell K2 would have the formula I2-J2 8. Set up Column L on the new page. The cells in this column hold the differences between the catalog V and R magnitudes for each star, e.g., cell L2 would have the formula B2-C2 9. Create an X-Y scatter plot. Use the values in Column K for the X-axis and the values in Column L for the Y-axis. Make sure you use the correct values for the two axes. The solution you’re finding converts a given instrumental color index to the standard color index. If you reverse the roles of the two axes, then you won’t find the right color index values for the comparisons and target. 10. Add the trend line and be sure to include the option to display the trend line’s linear formula (the correlation value is also worth displaying, since it describes the quality of the solution in quantitative terms). 11. From the example above, the formula to convert a v–r instrumental to a (V–R) standard color index would be (V–R) = 0.977 (v–r) + 0.016 The slope should be close to 1.00 (here it’s 0.977); a slope of exactly 1.00 would indicate a perfect match of your system to the standard system. If you get something significantly different, check the original data and formulae. If you still have problems, confirm that you were using the V and R filters as you thought. I once had the filter control software set up incorrectly, and so images were being taken in R instead of V and vice versa. That makes for some very frustrating days and nights! Make sure you save the results from both sets of transforms, i.e., the primary and hidden. You’ll need them later on.
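As a numerical cross-check on step 11, the same fit can be done with NumPy. The v and r arrays below are the averaged instrumental magnitudes computed from the V- and R-filter tables in this appendix; the result should land near the (V–R) = 0.977 (v–r) + 0.016 quoted above:

```python
import numpy as np

# Catalog values and averaged instrumental magnitudes for the
# eight LW CAS standards (columns I and J of the Hidden Transforms page).
V = np.array([12.991, 14.218, 15.332, 13.091, 14.398, 13.313, 13.839, 14.309])
R = np.array([12.510, 13.372, 14.369, 12.626, 13.995, 12.762, 13.462, 13.538])
v = np.array([-8.6135, -7.3495, -6.2280, -8.5375, -7.2140, -8.3010, -7.7825, -7.2750])
r = np.array([-9.1015, -8.1955, -7.1975, -8.9925, -7.6230, -8.8415, -8.1410, -8.0530])

# Instrumental color index on the X-axis, standard on the Y-axis,
# matching the orientation warned about in step 9:
slope, intercept = np.polyfit(v - r, V - R, 1)
```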
Appendix C: First-Order (Hardie) Example

This example shows how to use a spreadsheet to compute the first-order extinction in a given filter using the modified Hardie method (see page 58). Recall that this method requires that you shoot two standard fields, one at low air mass and the other at high air mass. The goal is to get as large an air mass difference as possible while staying more than 30° above the horizon. You can use the low air mass (higher altitude) field for the transformation calculations as well, saving you some time before starting to work the target field.
The Data

V Filter
Name          V       R       v1      v2      X
------------------------------------------------
LW CAS 0045   12.991  12.510  -8.599  -8.628  1.080
LW CAS 0058   14.218  13.372  -7.356  -7.343  1.080
LW CAS 0145   15.332  14.369  -6.231  -6.225  1.079
LW CAS 0249   13.091  12.626  -8.535  -8.540  1.080
LW CAS 19451  14.398  13.995  -7.193  -7.235  1.080
LW CAS 19468  13.313  12.762  -8.301  -8.301  1.080
LW CAS 19566  13.839  13.462  -7.787  -7.778  1.080
LW CAS 19822  14.309  13.538  -7.266  -7.284  1.080

R Filter
Name          V       R       r1      r2      X
------------------------------------------------
LW CAS 0045   12.991  12.510  -9.099  -9.104  1.079
LW CAS 0058   14.218  13.372  -8.190  -8.201  1.079
LW CAS 0145   15.332  14.369  -7.202  -7.193  1.079
LW CAS 0249   13.091  12.626  -8.991  -8.994  1.080
LW CAS 19451  14.398  13.995  -7.621  -7.625  1.079
LW CAS 19468  13.313  12.762  -8.843  -8.840  1.080
LW CAS 19566  13.839  13.462  -8.133  -8.149  1.080
LW CAS 19822  14.309  13.538  -8.048  -8.058  1.080

C Filter
Name          V       R       c1      c2      X
------------------------------------------------
LW CAS 0045   12.991  12.510  -9.667  -9.665  1.079
LW CAS 0058   14.218  13.372  -8.571  -8.558  1.079
LW CAS 0145   15.332  14.369  -7.507  -7.517  1.078
LW CAS 0249   13.091  12.626  -9.567  -9.572  1.079
LW CAS 19451  14.398  13.995  -8.221  -8.225  1.079
LW CAS 19468  13.313  12.762  -9.359  -9.354  1.079
LW CAS 19566  13.839  13.462  -8.787  -8.782  1.079
LW CAS 19822  14.309  13.538  -8.476  -8.464  1.079
V Filter
Name          V       R       v1      v2      X
------------------------------------------------
AK GEM 0030   13.374  13.028  -7.966  -7.964  2.084
AK GEM 0121   14.354  13.964  -6.964  -6.936  2.089
AK GEM 0209   13.947  13.355  -7.312  -7.304  2.086
AK GEM 0221   11.530  11.239  -9.805  -9.812  2.089
AK GEM 0249   13.303  12.897  -8.011  -8.015  2.085
AK GEM 0290   14.573  13.898  -6.675  -6.681  2.091
AK GEM 0307   14.162  14.005  -7.165  -7.167  2.086
AK GEM 0349   14.002  13.676  -7.281  -7.272  2.092
AK GEM 0398   14.016  13.813  -7.283  -7.306  2.089
AK GEM 0431   12.309  12.123  -9.040  -9.033  2.092
AK GEM 0462   11.467  10.922  -9.835  -9.852  2.089
AK GEM 0499   13.549  13.242  -7.752  -7.775  2.093
R Filter
Name          V       R       r1       r2       X
--------------------------------------------------
AK GEM 0030   13.374  13.028  -8.289   -8.312   2.060
AK GEM 0121   14.354  13.964  -7.377   -7.338   2.065
AK GEM 0209   13.947  13.355  -7.994   -7.949   2.062
AK GEM 0221   11.530  11.239  -10.133  -10.094  2.065
AK GEM 0249   13.303  12.897  -8.434   -8.421   2.061
AK GEM 0290   14.573  13.898  -7.451   -7.373   2.067
AK GEM 0307   14.162  14.005  -7.339   -7.311   2.062
AK GEM 0349   14.002  13.676  -7.658   -7.616   2.068
AK GEM 0398   14.016  13.813  -7.520   -7.499   2.065
AK GEM 0431   12.309  12.123  -9.211   -9.217   2.068
AK GEM 0462   11.467  10.922  -10.423  -10.389  2.065
AK GEM 0499   13.549  13.242  -8.043   -8.088   2.069
C Filter
Name          V       R       c1       c2       X
--------------------------------------------------
AK GEM 0030   13.374  13.028  -9.028   -9.067   2.036
AK GEM 0121   14.354  13.964  -8.053   -8.093   2.041
AK GEM 0209   13.947  13.355  -8.504   -8.534   2.038
AK GEM 0221   11.530  11.239  -10.841  -10.883  2.041
AK GEM 0249   13.303  12.897  -9.078   -9.147   2.037
AK GEM 0290   14.573  13.898  -7.944   -7.973   2.043
AK GEM 0307   14.162  14.005  -8.184   -8.220   2.038
AK GEM 0349   14.002  13.676  -8.341   -8.391   2.043
AK GEM 0398   14.016  13.813  -8.348   -8.364   2.041
AK GEM 0431   12.309  12.123  -10.047  -10.096  2.044
AK GEM 0462   11.467  10.922  -10.970  -11.020  2.041
AK GEM 0499   13.549  13.242  -8.809   -8.860   2.045
The Spreadsheet

The spreadsheet is going to contain three pages. Each page will hold the data for a given filter. I’ll cover the details for the V page only. The other pages are set up identically, save that the instrumental magnitudes and other values appropriate to the given filter are substituted. As before, the example uses the (V–R) color index. To use (B–V) or (V–I), you would need observations in those filters and the corresponding catalog values. Columns B and C contain the catalog V and R magnitudes of the stars. These columns are identical on all three pages. Columns D and E contain the instrumental magnitudes for the given filter; two images in each filter were taken. Column F contains the average air mass (X) for the two observations. Column G contains the transform for the given filter. The value was found using the procedure covered in the previous appendix.
Column H contains the average of the two instrumental magnitudes. The general formula is AVERAGE(Dx, Ex), where x is the row number.

(V–R)
Column I contains the difference between the catalog V and R values. The general formula is Bx – Cx, where x is the row number.

v(adj)
Column J contains the adjusted v magnitude. This uses the reduction formula

Mr = Mc – Mf – (Tf * CI)

Mr   reduced magnitude
Mc   catalog magnitude in given filter (V for C)
Mf   instrumental magnitude in given filter
Tf   transform for given filter
CI   standard color of star using catalog values

The general formula for the cells in Column J then becomes Bx – Hx – (Ix * Gx), where x is the row number.

Mean X
Cell F10 holds the average of the air mass values for the first Henden field. The formula is AVERAGE(F2:F9)

Mean
Cell J10 holds the average of the v(adj) values for the first Henden field. The formula is AVERAGE(J2:J9). The STDEV in cell J11 is for information only: STDEV(J2:J9)

Mean X
Cell F25 holds the average of the air mass values for the second Henden field. The formula is AVERAGE(F13:F24)

Mean
Cell J25 holds the average of the v(adj) values for the second Henden field. The formula is AVERAGE(J13:J24). The STDEV in cell J26 is for information only: STDEV(J13:J24)
The plot is an X-Y scatter. For the X values, select cells F10 and F25. For the Y values, select cells J10 and J25. Make sure you select only those two cells for each axis and not the full range of cells. Add the trend line and display its formula. The inverted magnitude system (bright stars have smaller numbers) makes things a little confusing by producing a trend line with a negative slope. The first-order extinction is always positive, so take the absolute value of the slope.
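The whole modified Hardie reduction boils down to two steps: adjust each standard star's magnitude, then take the slope of the field means against air mass. A sketch in Python with made-up numbers standing in for the spreadsheet columns (the function and variable names are mine, not the book's):

```python
import numpy as np

def adjusted_mag(cat_mag, inst_mag, transform, color_index):
    """The v(adj) column: Mr = Mc - Mf - (Tf * CI) for each star."""
    return cat_mag - inst_mag - transform * color_index

def hardie_extinction(adj_low, x_low, adj_high, x_high):
    """First-order extinction from the two field means: the slope of
    adjusted magnitude vs. air mass. Take the absolute value, since
    extinction is always positive."""
    slope = (np.mean(adj_high) - np.mean(adj_low)) / \
            (np.mean(x_high) - np.mean(x_low))
    return abs(slope)

# Hypothetical low and high air mass fields (two stars each)
adj_low = adjusted_mag(np.array([12.99, 14.22]), np.array([-8.60, -7.35]),
                       0.98, np.array([0.48, 0.85]))
adj_high = adjusted_mag(np.array([13.37, 14.35]), np.array([-7.97, -6.95]),
                        0.98, np.array([0.35, 0.39]))
k_v = hardie_extinction(adj_low, [1.08, 1.08], adj_high, [2.08, 2.09])
```

With only two field means, the "fit" is just rise over run, which is why the spreadsheet plot selects exactly two cells per axis.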
The R and C pages are shown below as guides for proper setup. Again, take the absolute value of the derived slopes. Also note that, as it should be, the R slope is slightly less than that for the V filter, and so k'v – k'r is a small positive number.
Appendix D: First-Order (Comp) Example

In this example, you’ll see how to use a spreadsheet to compute the first-order extinction value for a given filter. This method plots the instrumental magnitude of a comparison star in the target field against the air mass for each image. To use this method, the conditions at your location during the observing run must be fairly consistent. An occasional passing cloud might be OK, but a steadily increasing haze is not.
The Data

The table below shows the data used for this example. Read the data from top to bottom, first column pair, then second, and finally the third.

X      C1IM     X      C1IM     X      C1IM
--------------------------------------------
2.024  -7.822   1.269  -7.933   1.089  -7.965
1.952  -7.827   1.252  -7.941   1.088  -7.962
1.889  -7.834   1.236  -7.943   1.087  -7.951
1.830  -7.845   1.221  -7.936   1.086  -7.963
1.776  -7.854   1.207  -7.945   1.087  -7.963
1.725  -7.869   1.194  -7.944   1.087  -7.959
1.679  -7.861   1.181  -7.939   1.089  -7.965
1.635  -7.872   1.170  -7.943   1.091  -7.953
1.594  -7.884   1.159  -7.946   1.093  -7.958
1.556  -7.888   1.150  -7.947   1.096  -7.963
1.521  -7.897   1.141  -7.938   1.100  -7.958
1.488  -7.885   1.133  -7.955   1.105  -7.957
1.457  -7.903   1.125  -7.961   1.110  -7.964
1.428  -7.912   1.118  -7.957   1.116  -7.954
1.400  -7.909   1.112  -7.955   1.122  -7.946
1.375  -7.910   1.107  -7.953   1.130  -7.957
1.351  -7.928   1.102  -7.958   1.138  -7.954
1.329  -7.925   1.098  -7.961   1.147  -7.955
1.308  -7.928   1.095  -7.958   1.156  -7.942
1.288  -7.929   1.092  -7.957   1.166  -7.962
The Spreadsheet

The screen shot below shows a portion of the spreadsheet using the above data. The air mass data is plotted along the X-axis, while the instrumental magnitudes are plotted on the Y-axis. The observations were made with a C filter. The procedure for any other filter would be identical, save that you’d use the instrumental magnitude obtained in that filter and the air mass at the time of the observation in that filter.
Slope
Cell E2 shows the slope of the least squares solution. Its formula is SLOPE(B2:B61,A2:A61)
Corr
Cell E3 shows the correlation value of the least squares solution. This gives you the quality of the fit of the data to the solution. A perfect fit has a value of 1.000 (or –1.000). The formula is CORREL(B2:B61,A2:A61)
The trend line for the plot shows the intercept value. It is not used, though it may be of interest since it would be the instrumental magnitude of the star outside the Earth’s atmosphere. Note that the plot has the more positive magnitudes toward the top. This allows finding a positive slope (remember, the first-order extinction term is always positive). However, it is backwards in the sense that a plot of magnitudes should have brighter values at the top, meaning the values would get more negative toward the top instead of toward the bottom. The magnitude system and the standard X/Y plotting we learned in school don’t always get along.
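The two spreadsheet functions used here, SLOPE() and CORREL(), have direct equivalents in most numerical libraries. A sketch in Python, fitting a hypothetical subset of the run above (the function name is mine):

```python
import numpy as np

def first_order_comp(airmass, inst_mag):
    """Least-squares slope of instrumental magnitude vs. air mass
    (SLOPE) and the correlation coefficient as a quality check
    (CORREL). Extinction is always positive, so the absolute value
    of the slope is returned."""
    slope, intercept = np.polyfit(airmass, inst_mag, 1)
    corr = np.corrcoef(airmass, inst_mag)[0, 1]
    return abs(slope), corr

# Hypothetical subsample: the star fades (less negative) at higher X
X = np.array([2.02, 1.78, 1.52, 1.31, 1.15, 1.09])
m = np.array([-7.822, -7.854, -7.897, -7.925, -7.946, -7.960])
k, corr = first_order_comp(X, m)
```

A correlation near 1.000 (or –1.000) indicates a tight fit; a poor correlation is a sign the night was not photometric enough for this method.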
Appendix E: Standard Color Indices

Once you’ve determined the transforms and, if necessary, the first-order extinction values, you can find the standard color indices of the comparisons and target. These values will be used in the differential formula for finding the standard magnitude of the target in a single color.
The Data

Three images were taken of M67, a field with very well-known values, in V, R, and C to simulate a target field. Having well-known values allows you to check the results. Of course, when you work a real target field without known values, you won’t have such a check unless you also shoot an independent reference field that’s nearby. The raw instrumental magnitudes were measured using MPO PhotoRed.

V Filter
Name     v1       v2       v3       X
--------------------------------------
Comp1    -10.841  -10.839  -10.836  1.212
Comp2    -10.091  -10.091  -10.088  1.212
Comp3    -9.228   -9.228   -9.225   1.212
Comp4    -10.099  -10.098  -10.097  1.212
Target   -10.167  -10.163  -10.164  1.212

R Filter
Name     r1       r2       r3       X
--------------------------------------
Comp1    -11.379  -11.348  -11.367  1.215
Comp2    -10.410  -10.410  -10.407  1.215
Comp3    -9.738   -9.736   -9.728   1.215
Comp4    -10.632  -10.632  -10.632  1.215
Target   -10.137  -10.130  -10.130  1.215
The Spreadsheet

The screen shot below shows the setup for the spreadsheet used to calculate the color indices of the comparisons and target. Block A1:E6 contains the instrumental magnitudes and air mass values for the three V images. Block A8:E13 contains the instrumental magnitudes and air mass values for the three R images.
Cells B16 and B17 hold the V and R first-order extinction values, found with the modified Hardie method in Appendix C. The “hidden” transform and zero-point (see page 56) are in cells B18 and B19, respectively. These values are not those from Appendix B, but were found on a different night using a different reference field. They are similar but differ by a fair amount in the zero-point values. Remember that for this particular reduction you do not average the three instrumental magnitudes in a given color. Instead, you’ll compute the color index based on the three pairs and then compute the mean and standard deviation. If you did the average first, you’d have a single value for V and R. You could compute the error of each average and propagate those through the process, but this approach is a little easier. Cells B22:B26, titled “CI1,” hold the computed color index values for the four comparisons and target based on the instrumental magnitudes from Column B. Cells C22:C26 and D22:D26 use their respective instrumental magnitudes. Cell B22’s formula is

(((B2-B$16*E2) - (B9-B$17*E9)) * B$18) + B$19
Note the use of the dollar sign ($) for some of the cell references. This allows copying the cell to B23:B26 without having the spreadsheet automatically increment the references to the constants in B16:B19. You could also create a name reference for the constant values, which would avoid having to edit cell C22 after pasting a copy of B22 there, since the paste does increment the references to the cells holding the constant values. The formulae for C22 and D22 are, respectively,

(((C2-B$16*E2) - (C9-B$17*E9)) * B$18) + B$19
(((D2-B$16*E2) - (D9-B$17*E9)) * B$18) + B$19

Cells E22:E26 hold the average value of the three derived color indices in their respective rows. The general formula for each cell is AVERAGE(BX, CX, DX)
X = row number
Cells F22:F26 hold the standard deviation for each row. The general formula is STDEV(BX, CX, DX), where X is the row number. As you can see, the standard deviations are very low, though that is influenced by having a minimum number of values. How do the derived values compare to the catalog values for the stars? The catalog (V–R) values are shown in cells G22:G26 and the differences, M–C, are in H22:H26. The standard deviation of the errors was 0.001 m. There appears to be a slight systematic error of 0.015 m, meaning that the derived (V–R) values are a little higher than the catalog values. This is acceptable, especially since the final derivation of the standard magnitudes depends on the differences of the color indices. Thus, while systematically high by a small amount, the error between the true and derived differential color index for any one comparison and the target, i.e., (V–R)comp – (V–R)target, will be nearly 0.
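The cell B22 formula can be read as: correct each instrumental magnitude for first-order extinction, difference the two, then apply the hidden transform and zero-point. A sketch in Python of that single-cell calculation, with hypothetical constants (the names k_v, t_vr, etc. are mine, not the book's):

```python
def standard_color_index(v_inst, x_v, r_inst, x_r, k_v, k_r, t_vr, zp_vr):
    """The spreadsheet B22 formula as a function:
    (((v - k'v*Xv) - (r - k'r*Xr)) * Tvr) + ZPvr"""
    v0 = v_inst - k_v * x_v   # extinction-corrected instrumental v
    r0 = r_inst - k_r * x_r   # extinction-corrected instrumental r
    return (v0 - r0) * t_vr + zp_vr

# Comp1's first v/r pair from the table above, with hypothetical
# extinction, transform, and zero-point values
ci = standard_color_index(-10.841, 1.212, -11.379, 1.215,
                          k_v=0.25, k_r=0.18, t_vr=0.98, zp_vr=0.41)
```

Calling it three times with the three v/r pairs and averaging reproduces the CI1–CI3 columns and the mean in Column E.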
Appendix F: Comparison Standard Magnitudes

The derivation of the standard magnitudes for the comparisons uses almost the same data as when you found the standard color index of the comparisons and target. Here, you will use the same data for the V and R filters but also add the data for the C filter, which – presumably – was the primary filter for imaging the target field. This allows you to see how well the C to V reduction works and how it matches the reduction using the V filter.
The Data

V Filter
Name     v1       v2       v3       X
--------------------------------------
Comp1    -10.841  -10.839  -10.836  1.212
Comp2    -10.091  -10.091  -10.088  1.212
Comp3    -9.228   -9.228   -9.225   1.212
Comp4    -10.099  -10.098  -10.097  1.212
Target   -10.167  -10.163  -10.164  1.212

R Filter
Name     r1       r2       r3       X
--------------------------------------
Comp1    -11.379  -11.348  -11.367  1.215
Comp2    -10.410  -10.410  -10.407  1.215
Comp3    -9.738   -9.736   -9.728   1.215
Comp4    -10.632  -10.632  -10.632  1.215
Target   -10.137  -10.130  -10.130  1.215

C Filter
Name     c1       c2       c3       X
--------------------------------------
Comp1    -11.898  -11.899  -11.901  1.217
Comp2    -11.082  -11.079  -11.085  1.217
Comp3    -10.274  -10.283  -10.273  1.217
Comp4    -11.159  -11.158  -11.152  1.217
Target   -11.086  -11.075  -11.087  1.217
The Spreadsheet

The general layout of the spreadsheet is the same as in Appendix E, save that the C filter has been added. Also, remember that this time you do average the values of the three instrumental magnitudes and use that single value in the reduction formula. You’re still able to find a standard deviation, which gives you an idea of the error within your measurements and reductions. A comparison to the actual catalog values is included in the spreadsheet so that you can see how well the method worked. The reduction formula is really for the comparisons only. The target is included in this exercise to see what value would be derived. In the next appendix, we’ll reduce the target by the differential formula and see how the results compare. Cells A1:E6 hold the V data for the four comps and target. Cells A8:E13 hold the R data, while cells A15:E20 hold the C filter data. The first-order extinction and transforms values are stored in rows 22 through 24. Note that the first-order terms are the same as those used in the exercise for the standard color indices (Appendix E). The transforms are from the same run that generated the hidden values used in Appendix E.
Cells F2:F6, F9:F13, and F16:F20 hold the average value of the three instrumental magnitudes for the row. The general formula is AVERAGE(Bx, Cx, Dx), where x is the row number.

sd
Cells G2:G6, G9:G13, and G16:G20 hold the standard deviation of the average instrumental magnitude. The general formula is STDEV(Bx, Cx, Dx), where x is the row number.
Note that this is not the true error of the derived value, since it does not include the errors in the first-order extinction, transform, and nightly zero-points. However, it does show the relative stability of the measurements.

CI
Cells H2:H6, H9:H13, and H16:H20 hold the color index values derived for the comparisons and target in the color index exercise, Appendix E.
V, R, C
Cells I2:I6, I9:I13, and I16:I20 hold the derived standard magnitudes for the comparisons and target. The general formulae for V, R, and C, respectively, are

Fx - (B$22*Ex) + (D$22*Hx) + F$22
Fx - (B$23*Ex) + (D$23*Hx) + F$23
Fx - (B$24*Ex) + (D$24*Hx) + F$24

where “x” is the row number. Note the changes in the references to the cells holding the first-order terms (Bxx), transforms (Dxx), and nightly zero-points (Fxx). Again, the dollar sign ($) allowed creating the formula for the first cell, e.g., I2, and then doing a copy/paste to the remaining cells in that column for the filter.
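The Column I formula is the standard all-sky reduction: average instrumental magnitude, minus first-order extinction, plus the color term and the nightly zero-point. A sketch in Python with hypothetical constants (the argument names are mine):

```python
def standard_mag(inst_avg, airmass, color_index, k, transform, zero_point):
    """The Column I reduction: Fx - (k'*X) + (T*CI) + ZP, where Fx is
    the averaged instrumental magnitude, k' the first-order extinction,
    T the transform, CI the star's color index, and ZP the nightly
    zero-point."""
    return inst_avg - k * airmass + transform * color_index + zero_point

# Comp1's averaged v magnitude with hypothetical k', T, and ZP values
V = standard_mag(-10.839, 1.212, 0.85,
                 k=0.25, transform=-0.05, zero_point=23.9)
```

The same function serves all three filters; only the constants (rows 22–24 in the spreadsheet) change.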
V-CAT, R-CAT, C-CAT
Cells J2:J6, J9:J13, and J16:J20 hold the catalog values for the stars in M67, taken from the Henden field data.
Cells K2:K6, K9:K13, and K16:K20 hold the differences between the derived magnitude in the given filter and the catalog value. The general formula is Ix–Jx
x = row number
As you can see, this was a good night: the values in Column K are near 0.01 m. You hope to get such good results all the time. Note that the C to V reductions very nearly duplicate the catalog values as well as the derived V values. Another thing to notice is that the C instrumental magnitudes are, on average, about a magnitude brighter than the V values for any comp star or target, and about half a magnitude brighter than the R values. This shows you how much filters can reduce light and why the C filter is sometimes the difference between getting data and not.
Appendix G: Target Standard Magnitudes

The more rigorous approach to reducing the target raw magnitudes to a standard system depends on differential photometry. This way, the extinction and zero-point terms drop out of consideration (assuming you’re working above 30° altitude and your field is not a degree or more on a side).
The Data

The following is part of the data taken of a variable star discovered by the author. The listed JD values are JD – 2400000.0.

Comp1   Comp2   Comp3   Target  JD
------------------------------------------
-7.409  -7.611  -7.413  -7.871  53509.660414
-7.525  -7.716  -7.455  -7.961  53509.664302
-7.689  -7.901  -7.712  -8.144  53509.666234
-7.757  -7.940  -7.756  -8.177  53509.668180
-7.759  -7.947  -7.733  -8.160  53509.670113
-7.755  -7.967  -7.738  -8.177  53509.672057
-7.739  -7.950  -7.708  -8.123  53509.676795
-7.824  -8.024  -7.818  -8.232  53509.678735
-7.827  -8.028  -7.833  -8.221  53509.680668
-7.844  -8.065  -7.832  -8.230  53509.682611
-7.836  -8.030  -7.833  -8.200  53509.684557
-7.837  -8.060  -7.868  -8.216  53509.686500
-7.855  -8.061  -7.861  -8.200  53509.688444
-7.848  -8.054  -7.885  -8.207  53509.692323
-7.901  -8.100  -7.885  -8.220  53509.694267
-7.916  -8.089  -7.890  -8.172  53509.696201
-7.909  -8.099  -7.894  -8.187  53509.698145
-7.908  -8.110  -7.872  -8.160  53509.700089
-7.850  -8.056  -7.845  -8.123  53509.702035
-7.867  -8.123  -7.880  -8.115  53509.709556
-7.917  -8.153  -7.930  -8.153  53509.711479
-7.930  -8.154  -7.974  -8.154  53509.713419
-7.928  -8.151  -7.960  -8.134  53509.715367
-7.903  -8.125  -7.958  -8.114  53509.717312
-7.913  -8.160  -7.952  -8.112  53509.719268
-7.907  -8.157  -7.934  -8.108  53509.721201
-7.931  -8.166  -7.952  -8.123  53509.723144
-7.943  -8.154  -7.978  -8.101  53509.725077
-7.950  -8.174  -7.950  -8.102  53509.727011
-7.933  -8.157  -7.960  -8.093  53509.728955
-7.936  -8.158  -7.971  -8.087  53509.730900
-7.933  -8.163  -7.943  -8.076  53509.732845
-7.888  -8.121  -7.900  -8.039  53509.734789
-7.922  -8.142  -7.916  -8.026  53509.736733
The Spreadsheet

A portion of the spreadsheet is shown in the screen shot above. Block A1:C7 holds the (V–R) color index values found for the three comparisons and target as well as the derived standard magnitudes for the three comparisons. The Tc transform values were based against the (V–R) color index of the reference stars.

T/C1
Cells F22:F61 contain the derived standard magnitude based on the differential instrumental magnitudes of Comp1 and the Target as well as the differential of the color indices for the two objects. The general formula is (Ex - Bx) + B$7*(B$5 - B$2) + C$2 Where “x” is the row number. Again note the use of the dollar sign ($), which allows you to enter the formula in F22 as (E22 – B22) + B$7*(B$5 - B$2) + C$2 and then copy/paste the cell into the remaining cells in column F.
T/C2
Cells G22:G61 contain the derived standard magnitude based on the differential instrumental magnitudes of Comp2 and the Target as well as the differential of the color indices for the two objects. The general formula is (Ex - Cx) + B$7*(B$5 - B$3) + C$3 Where “x” is the row number. Note the subtle changes required to use the data for Comp2.
T/C3
Cells H22:H61 contain the derived standard magnitude based on the differential instrumental magnitudes of Comp3 and the Target as well as the differential of the color indices for the two objects. The general formula is (Ex - Dx) + B$7*(B$5 - B$4) + C$4
Where “x” is the row number. Note the subtle changes required to use the data for Comp3.

Mean
Cells I22:I61 contain the average value of the three derived standard magnitudes for a given row. The general formula is AVERAGE(Fx, Gx, Hx), where x is the row number.

S.D.
Cells J22:J61 contain the standard deviation of the three derived standard magnitudes for a given row. The general formula is STDEV(Fx, Gx, Hx), where x is the row number.
The plot is an X-Y type. Use the values in A22:A61 for the X-axis and the values in I22:I61 for the Y-axis. Make sure you invert the Y-axis so that lower numbers (brighter magnitudes) are at the top. The screen shot below shows the entire run as plotted in MPO PhotoRed. Try running the above spreadsheet using the data, as appropriate, from Appendix F. How do the results from the differential process compare to those found for the target in Appendix F?
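The T/C1 through T/C3 columns and the Mean/S.D. columns can be expressed as two small functions. A sketch in Python using one hypothetical row of data (the values and names below are illustrative, not from the book's spreadsheet):

```python
from statistics import mean, stdev

def diff_standard_mag(t_inst, c_inst, t_ci, c_ci, tc, c_std):
    """One comparison's version of the F22/G22/H22 formula:
    V_target = (t - c) + Tc*(CI_t - CI_c) + V_comp"""
    return (t_inst - c_inst) + tc * (t_ci - c_ci) + c_std

def reduce_row(t_inst, comps_inst, t_ci, comps_ci, tc, comps_std):
    """The Mean and S.D. columns: combine all comparisons for one image."""
    vals = [diff_standard_mag(t_inst, ci, t_ci, cci, tc, cstd)
            for ci, cci, cstd in zip(comps_inst, comps_ci, comps_std)]
    return mean(vals), stdev(vals)

# Hypothetical row: target and three comparisons, their (V-R) color
# indices, the transform Tc, and the comps' derived standard magnitudes
m, sd = reduce_row(-7.871, [-7.409, -7.611, -7.413],
                   0.52, [0.48, 0.61, 0.50],
                   0.98, [13.12, 13.05, 13.14])
```

Because only differences of instrumental magnitudes and color indices appear, the extinction and zero-point terms never enter, which is the point of the differential approach.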
Appendix H: Landolt/Graham Standard Fields

The Landolt fields are the calibration fields to use when converting the instrumental magnitudes of your system onto the Johnson–Cousins system. What follows are a number of finder charts and data based on the original paper by Landolt and the LONEOS catalog prepared by Brian Skiff of Lowell Observatory. The files for these can be obtained from the Lowell site

ftp://ftp.lowell.edu/pub/bas/starcats

Use an anonymous login. Download the landolt*.* and loneos*.* files. The lists were filtered to remove known or suspected variables, and the positions were updated to J2000. There are no listed errors, but they are on the order of 0.01 m and less in most cases. The unfortunate side is that the charts had to be made 1° on a side to include a sufficient number of stars. Most amateurs have fields much smaller than this, usually 20 arcminutes or less. The charts include a square at the center, drawn with a dashed line, that indicates a 20 arcminute field. With careful work, you can extend the number of stars you use in the transforms determination by shooting more than one area of the field through the various filters while keeping at least one star common to all the subfields that you image.
Graham Fields

Three charts included in this section are not original Landolt fields. They are located at about –45° declination. These fields should not be considered true standard fields, since they do have systematic errors of up to 0.02 m. However, they serve well as secondary standards for those in the Southern Hemisphere, since the fields transit nearly at the zenith.
Appendix I: Henden Charts

The following pages contain finder charts for fields where Arne Henden of the U.S. Naval Observatory – Flagstaff (now Director, AAVSO) has done high-precision photometry. The fields are distributed about the sky but, unfortunately, favor northern observers. The fields cannot be considered standard stars but “near secondary standards.” For truly accurate transforms to a standard system, you should use fields that helped define the system, e.g., the Landolt series. Still, these fields can provide a high degree of accuracy and be used by collaborations to reference all measurements against the same field. Each chart is 0.5° on a side. The magnitude scaling has been exaggerated somewhat so that faint stars are not lost. With the scaling used, naked-eye stars would be very large! However, the scaling does allow the brighter stars in the sequences to be quickly located on an image, which is the main goal. Up to 26 stars, labeled “A” through “Z”, are indicated on each chart. Only stars from the Henden sequences are labeled. The other stars on the chart are either in the sequence but too faint to be labeled or part of the MPO Star Catalog. The latter was used to include a sufficient number of additional field stars so that the field could be readily identified. Below each chart is a table that lists the RA/Declination of the chart center, the name of the file from which the data was taken, the date of the file (so you know which version of the data was used), and the data for the labeled stars. The columns are:

Label, Star Name, RA, Declination, B Magnitude, V Magnitude, R Magnitude, V Error, (B–V) Error, (V–R) Error
If a given magnitude is empty, there was no value available from the Henden sequence. When building the charts, there were several requirements:

1. No star was used for which there were fewer than three observations.
2. All magnitudes are the actual values from the data files. There was no conversion to R or (V–R) based on B/V magnitudes.
3. The error is not shown for a given magnitude band if the data file assigned a value greater than 9 or less than 0. You should not use the magnitude for a star if there is no error or if the error is significant.
Close but not Quite

Let me repeat something said above: these fields will give you a high degree of accuracy, but at best using them to determine your transforms will get you close to, though not necessarily on, the standard system. For truly accurate transforms, use the Landolt fields. Finder charts for some of the Landolt fields are available in the previous appendix. The charts indicate the name and date of the file from which the data was taken. It’s entirely possible the data has been updated since the chart was created. You should check Arne’s ftp site frequently for new or updated files. The URL is

ftp://ftp.nofs.navy.mil/pub/outgoing/aah/sequence/

Be aware that these fields have not been followed long enough to assure that none of the stars is variable. If you use the fields, make sure to use as large a number of stars as is practical so that you’ll have enough should you need to remove one or more stars because they are variable. Should you report observations after using these fields to calibrate your system, be sure to include a comment somewhere in your report that indicates which field was used, the date of the file from which the data was taken, and the stars you used to make the calibration. This allows you to correct your data should new photometry become available.
Appendix J: Hipparcos Blue–Red Pairs

Blue–red pairs are used to determine second-order extinction, usually to adjust derived magnitudes for the B and C filters. The second-order correction for V, R, and I is usually small and can be ignored in all but the most critical cases. Using the Hipparcos Catalog and the criteria below, several dozen pairs of stars were found. The magnitudes have been reduced from the Hipparcos system to the Johnson–Cousins BVR system using formulae in the ESA documentation and elsewhere. The values should be sufficiently close to "true" magnitudes for finding second-order extinction terms. If using only the B and V magnitudes, the pairs can also be used as secondary standards for finding transforms with the (B–V) color index. The derived R values are probably of insufficient accuracy to use for the (V–R) color index and R transforms. It would make a good project to determine the quality of the R magnitudes by back-checking derived R magnitudes in Landolt fields using the Hipparcos stars.
Steps used to produce the List

1. The Hipparcos file of approximately 118,000 stars was scanned to find all stars within the range –0.2 < (B–V)T < 1.8. This range is the same as that in the Hipparcos documentation, where reasonable conversions of Bt and Vt to Bj and Vj can be made.
2. The V magnitudes were derived by using the Hp magnitude and performing a linear interpolation using the (V–I) magnitude from the catalog against one of two lookup tables found on page 67. The tables differ depending on spectral class, with late G, K, and M dwarfs being treated differently from O–G5 (II–V) and G5III–M8III stars.
3. The value for (B–V)j came directly from the Hipparcos catalog (246-252).
4. The derivation of the R magnitudes was based on a linear solution found by Arne Henden: Rc = Vt – 0.014 – (0.5405 * B–V)
5. Once all stars within the (B–V)t range were found, they were put into separate lists of blue and red stars, with arbitrary dividing points of (B–V)j < 0.1 for blue and (B–V)j > 0.8 for red. This facilitated the search for blue–red pairs by iterating through the red stars and searching for a close blue star. The red and blue stars were considered a valid pair based on the following criteria:
1. The separation between the two stars was between 2 and 10 arcminutes (2 ≤ X ≤ 10).
2. The average declination of the pair was between –30 and +30 degrees (–30 ≤ D ≤ +30). This helps assure that, from about any latitude, a pair can be found that goes through a significant change in air mass over a few hours' time.
3. The difference in color between the red and blue stars, (B–V)red – (B–V)blue, was ≥ 0.8 mag.
4. The variability and proximity flags were not set or were empty.

6. The Blue–Red pairs found were written out to a text file with three lines per pair:
1. The first line gives the average J2000 coordinates of the pair plus the separation in arcminutes.
2. The second line gives the data for the BLUE star: the coordinates, HIP number, and B, V, R, (B–V), and (V–R) Johnson–Cousins magnitudes.
3. The third line gives the same data for the RED star.

With more than 100 pairs, it is not possible to include finder charts for these stars. It may be difficult to use some of these pairs, especially if you have a larger telescope. Remember that very short exposures can land on non-linear portions of the chip's response curve, and short exposures are also subject to scintillation noise. Usually, you want exposures on the order of ten seconds to avoid those problems. Even with filters, it's unlikely that you'll be able to expose that long on stars of 6th magnitude. If these stars are too bright, you can try stopping down the telescope with an off-axis mask, which does not affect the derived values, or you can use the stars from the Sloan Digital Sky Survey in the next appendix. The R magnitudes in the SDSS catalog are probably more accurate, so try to use that catalog when working with R magnitudes.
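The pairing search in step 5 and the criteria above can be sketched as follows. The catalog-record layout (dicts with `ra` and `dec` in degrees and a Johnson `bv` color) and the small-angle separation formula are assumptions for illustration, not the code actually used to build the list.

```python
import math

def separation_arcmin(s1, s2):
    """Angular separation of two nearby stars in arcminutes
    (small-angle approximation, adequate at these separations)."""
    dra = (s1["ra"] - s2["ra"]) * math.cos(math.radians((s1["dec"] + s2["dec"]) / 2))
    ddec = s1["dec"] - s2["dec"]
    return math.hypot(dra, ddec) * 60.0

def find_pairs(blues, reds):
    """Apply the appendix's criteria: separation of 2-10 arcmin,
    average declination within +/-30 deg, and a (B-V) difference
    (red minus blue) of at least 0.8 mag."""
    pairs = []
    for red in reds:
        for blue in blues:
            sep = separation_arcmin(blue, red)
            avg_dec = (blue["dec"] + red["dec"]) / 2
            if (2.0 <= sep <= 10.0 and -30.0 <= avg_dec <= 30.0
                    and red["bv"] - blue["bv"] >= 0.8):
                pairs.append((blue, red, sep))
    return pairs

# Illustrative entries loosely based on the first pair in the list:
blues = [{"ra": 6.425, "dec": -21.631, "bv": 0.056}]
reds  = [{"ra": 6.567, "dec": -21.673, "bv": 1.280}]
for blue, red, sep in find_pairs(blues, reds):
    print(f"pair found, separation {sep:.1f} arcmin")
```

The variability and proximity flag check (criterion 4) would be an additional filter on the raw catalog records before the loop.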
Hipparcos Blue–Red Pairs

RA           Dec.          HIP      B       V       R      (B-V)   (V-R)
00:25:59 -21:39:07 00:25:42 -21:37:53 00:26:16 -21:40:21
8.7 2027 2079
7.695 8.847
7.640 7.567
7.574 6.867
0.056 1.280
0.065 0.700
01:15:40 +20:27:30 01:15:28 +20:24:52 01:15:53 +20:30:08
8.1 5878 5906
7.071 8.257
6.985 7.244
6.926 6.689
0.086 1.013
0.059 0.555
02:02:48 -21:56:19 02:02:35 -21:57:56 02:03:02 -21:54:42
7.4 9534 9577
6.866 8.767
6.972 7.620
6.995 -0.106 -0.022 6.984 1.148 0.635
02:16:58 - 6:30:00 02:16:57 - 6:34:41 02:16:58 - 6:25:18
9.3 10640 10642
7.360 6.465
7.304 5.504
7.271 4.991
02:48:45 +25:07:56 02:48:45 +25:11:17 02:48:45 +25:04:36
6.6 13121 13120
5.860 8.499
5.894 7.465
5.896 -0.033 -0.001 6.923 1.035 0.541
03:54:29 + 9:12:46 03:54:45 + 9:10:39 03:54:14 + 9:14:53
8.9 18297 18252
7.492 9.722
7.424 8.719
7.369 8.198
0.069 1.004
0.054 0.520
03:58:24 -23:52:03 03:58:34 -23:47:43 03:58:15 -23:56:23
9.9 18575 7.980 18553 10.282
7.919 9.111
7.870 8.483
0.061 1.171
0.049 0.628
04:24:33 -21:48:19 04:24:41 -21:52:21 04:24:25 -21:44:17
8.9 20596 8.742 20572 10.632
8.764 9.690
8.757 -0.021 9.155 0.942
0.006 0.535
04:48:42 + 3:37:08 04:48:39 + 3:38:57 04:48:44 + 3:35:18
3.8 22343 22354
7.267 7.239
7.324 6.040
7.339 -0.057 -0.014 5.383 1.200 0.656
05:20:22 - 5:50:03 05:20:07 - 5:50:46 05:20:37 - 5:49:21
7.5 24891 24944
8.127 8.948
8.177 7.995
8.166 -0.050 7.483 0.953
0.011 0.512
05:26:10 -12:53:21 05:26:16 -12:52:25 05:26:04 -12:54:17
3.3 25426 25407
9.090 8.455
9.071 7.150
9.062 6.435
0.009 0.714
05:42:41 +18:59:02 05:42:53 +18:58:49 05:42:28 +18:59:15
6.2 26925 26886
6.644 8.673
6.659 7.333
6.663 -0.015 -0.003 6.533 1.340 0.799
05:50:31 + 1:44:21 05:50:24 + 1:46:43 05:50:38 + 1:41:58
5.9 27574 27602
9.131 9.036
9.101 7.990
9.136 7.393
0.031 -0.035 1.047 0.596
05:55:49 +12:58:56 05:56:00 +12:57:46 05:55:39 +13:00:06
5.8 28064 28029
8.164 8.697
8.165 7.692
8.145 7.163
0.000 1.006
06:00:29 - 7:31:46 06:00:27 - 7:35:23 06:00:31 - 7:28:10
7.2 28453 28459
8.277 8.677
8.361 7.418
8.380 -0.083 -0.019 6.688 1.260 0.729
06:06:45 -22:07:08 06:06:34 -22:08:41 06:06:56 -22:05:35
6.4 28944 7.984 28983 10.140
8.033 8.774
8.017 -0.048 8.052 1.367
0.056 0.962
0.019 1.306
0.033 0.512
0.019 0.528
0.015 0.721
06:07:39 + 8:14:43 06:07:27 + 8:16:14 06:07:52 + 8:13:13
6.9 29027 29063
7.943 9.839
7.983 8.973
7.993 -0.040 -0.009 8.483 0.867 0.489
06:21:32 +21:13:14 06:21:22 +21:11:59 06:21:43 +21:14:28
5.8 30211 30241
7.678 9.938
7.623 8.531
7.572 7.709
0.056 1.408
0.050 0.821
06:23:38 - 4:42:29 06:23:22 - 4:41:14 06:23:53 - 4:43:43
8.0 30387 30430
6.726 8.326
6.664 7.320
6.600 6.755
0.063 1.007
0.063 0.564
06:41:14 +24:01:16 06:41:12 +24:05:03 06:41:15 +23:57:30
7.5 32004 32010
8.679 9.097
8.662 8.077
8.640 7.542
0.017 1.021
0.022 0.534
06:42:57 -19:24:50 06:42:48 -19:21:16 06:43:06 -19:28:24
8.5 32147 32179
8.975 9.686
8.980 8.708
8.960 -0.005 8.194 0.979
0.020 0.513
06:48:24 - 4:45:34 06:48:23 - 4:42:07 06:48:25 - 4:49:01
6.9 32630 32633
7.725 9.371
7.645 8.386
7.581 7.824
0.081 0.986
0.063 0.561
07:05:35 +11:15:45 07:05:48 +11:13:26 07:05:22 +11:18:04
7.8 34231 34189
7.710 9.790
7.720 8.864
7.691 -0.010 8.395 0.927
0.029 0.468
07:06:19 + 6:07:49 07:06:33 + 6:08:27 07:06:05 + 6:07:11
7.0 34292 34260
8.244 9.124
8.250 8.105
8.227 -0.006 7.552 1.019
0.023 0.553
07:25:36 + 4:30:14 07:25:28 + 4:33:43 07:25:45 + 4:26:46
8.1 36031 36049
8.614 8.244
8.530 7.140
8.467 6.518
0.063 0.622
07:28:35 -24:09:19 07:28:19 -24:10:10 07:28:51 -24:08:28
8.2 36300 8.391 36347 11.113
8.449 9.974
8.456 -0.058 -0.006 9.549 1.140 0.424
07:43:34 - 4:41:39 07:43:32 - 4:40:50 07:43:37 - 4:42:28
2.0 37647 37655
7.054 7.832
7.138 6.912
7.172 -0.084 -0.033 6.409 0.920 0.503
07:51:46 -18:19:25 07:51:45 -18:21:27 07:51:47 -18:17:23
4.1 38379 38383
7.617 9.594
7.621 8.559
7.590 -0.003 8.001 1.035
08:00:42 +12:39:36 08:00:49 +12:40:42 08:00:36 +12:38:30
3.8 39183 39164
6.727 7.686
6.793 6.615
6.794 -0.065 -0.001 6.028 1.071 0.587
08:13:28 -22:22:09 08:13:13 -22:23:51 08:13:43 -22:20:27
8.2 40248 40295
9.099 9.906
9.069 8.635
8.991 7.939
0.030 1.271
0.078 0.696
08:31:21 - 9:23:01 08:31:13 - 9:26:25 08:31:30 - 9:19:38
7.9 41789 9.211 41814 10.262
9.128 9.189
9.066 8.554
0.083 1.073
0.062 0.635
08:40:08 +19:59:22 08:40:11 +19:58:16 08:40:06 +20:00:28
2.5 42523 42516
6.604 6.384
6.592 5.847
0.006 0.980
0.012 0.536
08:51:20 +11:46:19 08:51:11 +11:45:22 08:51:29 +11:47:16
4.9 43465 9.966 10.036 43491 11.047 9.698
9.940 -0.070 8.933 1.350
0.096 0.764
6.610 7.363
0.084 1.104
0.030 0.558
09:01:36 -14:27:29 09:01:33 -14:29:28 09:01:39 -14:25:31
4.2 44320 8.354 44328 10.413
8.332 9.431
8.295 8.872
0.022 0.982
0.037 0.559
09:42:38 -14:03:10 09:42:33 -13:58:44 09:42:43 -14:07:37
9.1 47616 47634
7.863 9.855
7.856 8.788
7.829 8.221
0.007 1.067
0.027 0.567
10:01:29 -15:26:22 10:01:22 -15:27:14 10:01:37 -15:25:29
4.1 49110 49127
7.980 9.669
7.962 8.653
7.939 8.191
0.019 1.016
0.022 0.462
12:48:02 +13:29:18 12:48:14 +13:33:11 12:47:51 +13:25:26
9.6 62478 62442
6.491 9.042
6.476 8.053
6.457 7.515
0.015 0.990
0.019 0.537
12:47:53 -24:56:00 12:47:53 -24:51:06 12:47:53 -25:00:55
9.8 62448 62447
6.370 7.850
6.428 6.806
6.433 -0.058 -0.004 6.244 1.045 0.561
15:06:16 +28:59:30 15:06:28 +28:59:02 15:06:04 +28:59:58
6.0 73931 9.089 73887 10.314
9.226 9.407
9.243 -0.136 -0.017 8.909 0.908 0.497
15:38:45 -19:45:24 15:39:00 -19:43:57 15:38:30 -19:46:52
7.9 76633 7.685 76589 10.201
7.639 8.931
7.598 8.261
0.047 1.270
0.040 0.670
16:22:01 + 0:31:50 16:22:12 + 0:29:53 16:21:50 + 0:33:46
6.8 80184 80163
7.794 9.424
7.698 8.360
7.627 7.776
0.097 1.065
0.070 0.583
16:46:42 + 2:15:58 16:46:46 + 2:12:34 16:46:37 + 2:19:23
7.1 82133 82126
8.789 8.874
8.869 7.900
8.861 -0.079 7.407 0.975
0.007 0.492
17:03:48 +13:35:11 17:03:39 +13:36:19 17:03:58 +13:34:03
5.1 83478 83504
5.924 7.105
5.915 6.056
5.902 5.495
0.010 1.050
0.012 0.560
17:46:32 + 6:10:41 17:46:36 + 6:07:14 17:46:28 + 6:14:08
7.1 86993 86977
7.727 8.937
7.753 7.904
7.747 -0.026 7.335 1.034
0.006 0.568
18:44:32 +26:12:53 18:44:50 +26:11:55 18:44:14 +26:13:51
9.1 91977 91914
7.976 9.223
7.926 8.198
7.877 7.651
0.051 1.025
0.048 0.547
18:53:45 +15:15:45 18:53:57 +15:15:35 18:53:33 +15:15:56
5.8 92741 8.612 92718 10.894
8.515 9.567
8.499 8.866
0.097 1.328
0.016 0.700
19:01:10 +26:25:42 19:00:56 +26:27:39 19:01:24 +26:23:44
7.9 93357 93407
8.145 9.021
8.128 7.881
8.118 7.259
0.017 1.140
0.010 0.622
19:40:43 +23:43:28 19:40:39 +23:43:04 19:40:47 +23:43:52
2.1 96801 96818
6.632 9.145
6.642 8.208
6.643 -0.009 -0.001 7.692 0.937 0.516
19:43:55 - 1:58:57 19:44:04 - 2:00:22 19:43:46 - 1:57:32
5.4 97107 97082
8.497 9.469
8.442 8.363
8.393 7.739
0.056 1.107
0.048 0.623
19:59:29 + 4:00:08 19:59:10 + 3:59:33 19:59:47 + 4:00:43
9.3 98374 98417
9.205 9.668
9.153 8.550
9.132 7.889
0.052 1.118
0.021 0.661
20:11:49 +26:51:09 20:11:50 +26:53:45 20:11:47 +26:48:32
5.2 99520 99518
7.201 6.905
7.290 5.509
7.305 -0.089 -0.014 4.742 1.397 0.766
20:30:29 +27:54:44 9.7 20:30:13 +27:51:58 101152 20:30:45 +27:57:30 101198
7.752 9.226
7.815 8.093
7.808 -0.062 7.482 1.133
0.006 0.611
20:32:32 -28:35:54 5.1 20:32:42 -28:35:43 101367 20:32:21 -28:36:06 101340
7.375 9.596
7.316 8.549
7.272 8.025
0.060 1.047
0.043 0.524
20:36:28 +16:45:50 8.6 20:36:40 +16:42:48 101687 20:36:15 +16:48:52 101645
8.421 8.013
8.339 6.614
8.280 5.821
0.083 1.400
0.058 0.792
20:45:42 +22:58:09 5.9 20:45:45 +22:55:15 102461 20:45:40 +23:01:03 102454
7.729 8.743
7.704 7.756
7.680 7.231
0.026 0.988
0.023 0.524
20:56:31 - 3:37:05 9.4 20:56:18 - 3:33:42 103347 20:56:44 - 3:40:28 103384
6.503 8.336
6.582 7.265
6.587 -0.078 -0.004 6.657 1.071 0.608
21:02:39 +21:44:43 8.7 21:02:48 +21:48:36 103870 21:02:31 +21:40:51 103843
7.483 9.023
7.532 7.947
7.533 -0.048 -0.001 7.327 1.076 0.620
21:31:13 +28:20:38 3.4 21:31:17 +28:19:18 106253 21:31:09 +28:21:58 106241
9.736 9.309
9.746 8.206
9.835 -0.010 -0.088 7.618 1.104 0.587
21:59:14 -23:51:42 3.6 21:59:17 -23:50:00 108542 21:59:12 -23:53:24 108532
7.060 9.551
7.035 8.325
6.998 7.662
0.026 1.227
0.036 0.662
22:13:33 +21:05:19 5.8 22:13:40 +21:02:58 109732 8.068 22:13:25 +21:07:39 109718 10.588
8.076 9.646
8.059 -0.007 9.196 0.942
0.016 0.450
22:16:01 +11:46:41 2.1 22:16:00 +11:45:35 109942 22:16:01 +11:47:46 109946
7.252 9.459
7.302 8.285
7.286 -0.049 7.630 1.174
0.015 0.655
22:45:29 + 3:39:37 5.3 22:45:37 + 3:37:52 112376 22:45:21 + 3:41:23 112347
7.852 8.685
7.895 7.582
7.902 -0.042 -0.007 6.969 1.103 0.613
23:35:09 +16:27:05 3.8 23:35:12 +16:25:23 116400 23:35:05 +16:28:47 116391
8.943 9.131
8.939 7.807
8.903 7.046
0.004 1.325
0.036 0.760
Appendix K: SDSS Blue–Red Pairs

The blue–red pairs from the Hipparcos Catalog may be too bright for those using larger scopes and/or the clear filter. Furthermore, the derivation of their R magnitudes is not as certain. The following list was created using the on-line data query utility for the Sloan Digital Sky Survey. The search was limited to ±5° declination and stars with a B magnitude between 10.0 and 14.0. The conversion of the SDSS magnitudes to the Johnson–Cousins system was based on the method by Jester et al., as outlined on the SDSS web site at

http://www.sdss.org/dr4/algorithms/sdssUBVRITransform.html

In general, the RMS errors of the conversion are:

(B–V)  (V–R)  B     V     R
0.04   0.03   0.03  0.01  0.03
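The conversion itself is a set of simple linear relations in the SDSS colors. The sketch below uses the commonly quoted Jester et al. coefficients for the stellar regime with Rc–Ic < 1.15; the coefficients are quoted from memory of the page cited above, so verify them there before relying on the results.

```python
def sdss_to_bvr(g, r, i):
    """Convert SDSS g, r, i magnitudes to Johnson-Cousins B, V, Rc
    using the Jester et al. relations for stars with Rc-Ic < 1.15
    (coefficients as commonly quoted on the SDSS transformation page)."""
    V = g - 0.59 * (g - r) - 0.01       # V from g and (g-r)
    B = V + 0.98 * (g - r) + 0.22       # via  B-V  = 0.98(g-r) + 0.22
    Rc = V - (1.09 * (r - i) + 0.22)    # via  V-Rc = 1.09(r-i) + 0.22
    return B, V, Rc

# Illustrative magnitudes only, not an actual catalog star:
B, V, Rc = sdss_to_bvr(13.00, 12.50, 12.30)
print(f"B={B:.3f}  V={V:.3f}  Rc={Rc:.3f}")   # B=13.405  V=12.695  Rc=12.257
```

The quoted RMS errors above apply to these converted values, which is why the V magnitudes (0.01) are the most trustworthy and the colors the least.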
The first line in each set is the average J2000 RA and declination and the separation in arcminutes. The second line is the data for the blue star, while the third line gives the data for the red star.

RA             Dec.           B       V       R      (B-V)   (V-R)
00:55:21.56 00:55:04.23 00:55:38.90
+00:56:12.7 +00:57:44.6 +00:54:40.9
9.2 13.869 13.015
13.767 12.164
12.818 11.917
0.103 0.852
0.948 0.246
01:06:48.70 01:06:59.34 01:06:38.07
+00:46:42.3 +00:50:49.6 +00:42:35.1
9.8 13.883 13.042
13.799 12.209
12.844 12.108
0.084 0.833
0.955 0.101
02:43:01.84 02:43:09.85 02:42:53.84
+00:14:03.6 +00:12:55.9 +00:15:11.3
4.6 12.159 13.924
11.961 13.085
11.957 12.899
0.198 0.840
0.004 0.186
02:43:07.22 02:43:09.85 02:43:04.60
+00:13:01.0 +00:12:55.9 +00:13:06.0
1.3 12.159 13.704
11.961 12.797
11.957 12.562
0.198 0.907
0.004 0.235
02:43:24.97 02:43:09.85 02:43:40.09
+00:13:37.7 +00:12:55.9 +00:14:19.5
7.7 12.159 13.457
11.961 12.583
11.957 12.557
0.198 0.874
0.004 0.026
02:57:13.73 02:57:04.98 02:57:22.48
+01:01:23.9 +00:59:00.1 +01:03:47.7
6.5 13.804 13.460
13.617 12.585
12.689 12.314
0.186 0.875
0.929 0.271
03:09:06.82 03:09:13.82 03:08:59.83
-01:10:52.4 -01:08:41.2 -01:13:03.5
5.6 11.993 12.919
12.039 12.101
11.714 11.796
-0.046 0.817
0.325 0.305
03:09:33.04 03:09:13.82 03:09:52.25
-01:08:09.8 -01:08:41.2 -01:07:38.4
9.7 11.993 12.629
12.039 11.813
11.714 11.584
-0.046 0.816
0.325 0.229
03:56:13.82 03:56:09.04 03:56:18.61
+00:29:50.8 +00:27:02.4 +00:32:39.2
6.1 13.368 13.723
13.485 12.724
13.005 12.402
-0.117 0.999
0.479 0.321
08:26:32.05 08:26:29.05 08:26:35.05
+02:51:16.6 +02:54:31.9 +02:48:01.2
6.7 12.856 13.360
12.986 12.535
12.889 12.353
-0.129 0.825
0.097 0.181
08:39:56.13 08:40:07.85 08:39:44.40
+00:46:15.2 +00:43:51.3 +00:48:39.2
7.6 13.391 13.863
13.201 12.849
12.380 12.544
0.190 1.013
0.821 0.306
08:40:59.35 08:41:03.19 08:40:55.50
+00:49:03.0 +00:50:15.5 +00:47:50.5
3.1 13.762 13.673
13.803 12.839
12.837 12.616
-0.041 0.834
0.966 0.223
08:41:02.15 08:41:03.19 08:41:01.11
+00:46:11.6 +00:50:15.5 +00:42:07.6
8.1 13.762 13.957
13.803 13.089
12.837 12.851
-0.041 0.869
0.966 0.237
08:55:24.03 08:55:21.85 08:55:26.22
+00:54:31.0 +00:52:17.7 +00:56:44.3
4.6 13.751 13.464
13.755 12.314
12.846 11.831
-0.004 1.151
0.909 0.482
08:55:37.89 08:55:21.85 08:55:53.94
+00:54:27.5 +00:52:17.7 +00:56:37.4
9.1 13.751 13.638
13.755 12.832
12.846 12.608
-0.004 0.806
0.909 0.224
09:10:26.55 09:10:26.85 09:10:26.24
+00:24:53.6 +00:24:08.0 +00:25:39.2
1.5 13.847 13.327
14.016 12.473
13.224 12.250
-0.168 0.854
0.792 0.223
09:16:44.92 09:16:56.59 09:16:33.25
+00:18:32.9 +00:21:51.3 +00:15:14.5
8.8 13.845 13.454
13.693 12.536
12.879 12.258
0.152 0.917
0.813 0.278
09:19:32.68 09:19:41.49 09:19:23.86
+00:46:41.7 +00:50:19.7 +00:43:03.7
8.5 13.792 13.410
13.768 12.374
13.736 12.243
0.025 1.035
0.031 0.132
09:24:16.92 09:24:12.50 09:24:21.35
+00:15:16.3 +00:14:39.5 +00:15:53.1
2.5 13.531 13.713
13.469 12.784
12.750 12.590
0.062 0.929
0.719 0.194
09:27:02.38 09:27:21.77 09:26:42.99
+00:25:39.1 +00:24:42.5 +00:26:35.7
9.9 13.845 13.580
13.891 12.749
13.032 12.510
-0.046 0.831
0.859 0.238
09:27:12.11 09:27:21.77 09:27:02.45
+00:25:10.5 +00:24:42.5 +00:25:38.6
4.9 13.845 13.155
13.891 12.173
13.032 11.807
-0.046 0.982
0.859 0.366
09:27:37.41 09:27:21.77 09:27:53.04
+00:21:45.4 +00:24:42.5 +00:18:48.4
9.8 13.845 12.814
13.891 11.946
13.032 11.572
-0.046 0.868
0.859 0.373
09:39:59.40 09:40:01.86 09:39:56.94
+02:44:40.3 +02:46:58.2 +02:42:22.4
4.8 12.635 12.959
12.459 12.094
11.496 11.987
0.176 0.865
0.963 0.107
09:44:39.73 09:44:47.79 09:44:31.67
+00:56:38.8 +00:59:05.9 +00:54:11.7
6.3 13.527 13.527
13.520 12.512
13.105 12.190
0.006 1.015
0.416 0.322
09:48:49.48 09:48:35.42 09:49:03.54
+01:13:46.4 +01:12:57.1 +01:14:35.8
7.2 12.775 13.322
12.905 12.489
12.901 12.176
-0.130 0.833
0.004 0.313
10:49:56.31 10:49:55.92 10:49:56.70
+03:21:19.3 +03:18:54.4 +03:23:44.2
4.8 12.961 12.523
12.762 11.690
11.904 11.378
0.199 0.833
0.859 0.312
11:27:39.87 11:27:52.64 11:27:27.09
+03:24:41.8 +03:27:11.7 +03:22:11.9
8.1 12.375 12.378
12.356 11.456
11.674 11.047
0.019 0.922
0.682 0.409
11:30:07.51 11:30:15.39 11:29:59.63
+00:38:04.8 +00:35:37.7 +00:40:31.9
6.3 13.728 13.448
13.846 12.588
13.503 12.278
-0.119 0.860
0.343 0.310
12:02:50.71 12:02:50.43 12:02:50.98
+00:12:08.1 +00:09:55.3 +00:14:20.9
4.4 13.976 13.678
13.852 12.788
13.700 12.472
0.125 0.890
0.152 0.315
12:39:30.16 12:39:43.47 12:39:16.85
-03:00:26.9 -03:02:16.7 -02:58:37.1
7.6 12.797 13.369
12.697 12.517
11.731 12.368
0.100 0.851
0.965 0.149
12:46:59.35 12:46:51.49 12:47:07.21
+00:12:45.1 +00:08:31.6 +00:16:58.6
9.3 12.985 13.271
12.844 12.095
12.175 11.549
0.141 1.176
0.669 0.546
12:47:02.56 12:46:51.49 12:47:13.62
+00:09:56.0 +00:08:31.6 +00:11:20.4
6.2 12.985 13.656
12.844 12.564
12.175 12.206
0.141 1.092
0.669 0.359
14:29:33.08 14:29:38.60 14:29:27.56
+03:01:21.5 +03:02:16.3 +03:00:26.7
3.3 13.117 13.498
12.963 12.426
12.070 12.110
0.153 1.072
0.893 0.317
14:30:44.55 14:30:49.39 14:30:39.72
+00:12:32.9 +00:08:05.8 +00:17:00.0
9.2 12.988 13.328
12.850 12.419
12.087 12.009
0.138 0.909
0.763 0.410
14:34:00.20 14:34:04.42 14:33:55.99
+05:03:02.9 +05:06:43.9 +04:59:21.9
7.7 13.189 13.409
12.996 12.277
12.180 12.004
0.193 1.132
0.816 0.274
14:52:57.35 14:52:58.62 14:52:56.07
+00:12:31.0 +00:07:57.6 +00:17:04.4
9.1 13.471 12.696
13.357 11.809
12.360 11.327
0.114 0.887
0.997 0.482
14:53:05.59 14:52:58.62 14:53:12.57
+00:11:22.3 +00:07:57.6 +00:14:47.0
7.7 13.471 13.686
13.357 12.494
12.360 11.953
0.114 1.192
0.997 0.541
15:09:33.15 15:09:35.55 15:09:30.75
+03:09:06.2 +03:10:01.3 +03:08:11.1
2.2 12.686 13.470
12.732 12.332
11.962 12.005
-0.047 1.138
0.770 0.327
15:09:46.44 15:09:35.55 15:09:57.33
+03:09:41.3 +03:10:01.3 +03:09:21.2
5.5 12.686 12.377
12.732 11.545
11.962 11.291
-0.047 0.832
0.770 0.254
15:09:47.20 15:09:35.55 15:09:58.85
+03:10:35.5 +03:10:01.3 +03:11:09.7
5.9 12.686 13.296
12.732 12.259
11.962 11.979
-0.047 1.037
0.770 0.279
15:14:49.29 15:15:03.06 15:14:35.52
+00:48:40.0 +00:52:16.1 +00:45:03.9
10.0 13.579 13.726
13.601 12.860
12.677 12.618
-0.022 0.867
0.924 0.242
15:19:43.79 15:19:56.59 15:19:31.00
+00:11:37.6 +00:08:48.9 +00:14:26.4
8.5 13.127 13.231
13.108 12.431
12.174 12.075
0.019 0.800
0.934 0.356
15:20:05.87 15:19:56.59 15:20:15.16
+00:12:10.0 +00:08:48.9 +00:15:31.2
8.2 13.127 12.126
13.108 11.033
12.174 10.506
0.019 1.093
0.934 0.528
15:23:46.74 15:23:55.85 15:23:37.63
-00:29:45.2 -00:25:39.8 -00:33:50.6
9.4 11.294 13.418
11.119 12.344
11.019 12.071
0.176 1.073
0.099 0.274
15:38:26.49 15:38:09.20 15:38:43.79
+02:29:29.5 +02:28:25.4 +02:30:33.5
8.9 12.802 12.599
12.603 11.489
11.914 11.137
0.199 1.110
0.689 0.353
20:49:20.93 20:49:19.44 20:49:22.42
-05:15:51.9 -05:13:33.2 -05:18:10.5
4.7 12.723 13.287
12.534 12.285
12.511 12.118
0.189 1.002
0.023 0.167
20:49:27.30 20:49:19.44 20:49:35.17
-05:09:42.9 -05:13:33.2 -05:05:52.5
8.6 12.723 11.955
12.534 11.024
12.511 10.535
0.189 0.931
0.023 0.488
23:24:21.79 23:24:24.37 23:24:19.20
+00:06:44.6 +00:06:49.8 +00:06:39.4
1.3 13.531 12.833
13.356 11.959
12.651 11.710
0.175 0.874
0.705 0.249
Index A Asteroids shape modeling, 7 tracking during observing run, 120 Why work, 7
B Bias Frames in photometry, 36 Binary Maker, 161 Binning, 190
C Camera Considerations anti-blooming vs. non-antiblooming, 90 download speed, 93 Field of View (FOV), 89 focal reducers, 89 front- vs. back-illuminated, 91 pixel size, 88 software compatibility, 94 support (technical), 95 temperature regulation, 94 Comparison Stars checking for variability, 129
D Dark Frames creating masters, 121 getting in observing run, 118 in photometry, 37
F Filter wheels, 96 Flat fields creating masters, 121
Flat Fields all-sky, 40 dome flats, 40 getting in observing run, 118 in photometry, 37 light boxes, 40 twilight flats, 39
G Guiding Considerations, 96
H Harris, Alan W., 105, 140, 142, 148 Henden Sequences charts using, 253 finding transforms, 26 Henden–Kaitchuck, 21
I Image Acquisition Software, 99 camera control, 99 Considerations ease of use, 99 one or multiple programs?, 100 telescope control, 99 Instrumental Magnitudes vs. standard, 27
K Koff, Bob, 113
L Landolt Standard Magnitudes charts using. See Lightcurves Merging Data Clear to Johnson V transform, 50
using an arbitrary standard, 71 modeling a binary system from, 161 Period Analysis, 137 Aliases, 145–54 bimodal curves, 139 finding the amplitude, 144 level of precision, 142 using a spreadsheet, 156 Variable Stars effect of changing mass ratio, 171 effect of orbital inclination, 16, 166 effect of orbital shape, 15 effect of temperature changes, 168 limb darkening, 17, 172 normalized flux data, 162 reflection effect, 16, 172
M Manual vs. Automated full automation, 109 manual, 109 measuring images, 125 Measuring Apertures object and sky, 41 shape, 45 size, 41
O Observing Programs data mining, 112 Extinction Observations comp star, 62 Hardie, 58 Selecting Targets, 113 Observing Run darks and flats, 118 Exposures how long and often, 119 tracking asteroids, 120 transform and extinction images, 119
P Peranso, 105 Photometry Clear to Johnson V transform, 50 Fundamentals air mass & extinction, 28 all-sky photometry, 32 aperture vs. PSF photometry, 45
bias frames, 36 dark frames, 37 differential photometry, 32 Extinction comp star method, 30 Hardie method, 30 flat fields, 37 measuring apertures, 41 pixel size vs. seeing, 36 seeing, 35 signal-to-noise (SNR), 33 transforms & zero-point, 31 History, 21 Johnson–Cousins, 23 photographic colors, 22 using an arbitrary standard, 71 Photometry Software available programs, 102 Considerations accuracy, 103 data exchange, 106 ease of use, 102 multiple sessions, 104 period analysis, 105 plotting, 106 reducing to standard magnitudes, 105 supported catalogs, 104 time of minimum (TOM) calculator, 108 zero-point adjustment, 107 Pixel Size, 36, 88 Pravec, Petr, 115
R Romanishin, Bill, 21, 104, 183
S Sessions definition of, 123 required & suggested data, 123–25 Signal-to-Noise (SNR) accuracy in photometry, 33 value vs. precision, 34 Skiff, Brian, 26, 152, 231 Software Vendors Axiom Research (Mira), 102 Bdw Publishing (MPO), 99, 102 DC3 Dreams (ACP), 99
Diffraction Limited (MaxIm DL), 99, 102 IRAF Group (IRAF), 102 MSB Software (AstroArt), 99, 102 Software Bisque, 99 Space Software (Starry Night Pro), 99 Willmann–Bell (AIP4WIN), 102 Standard Magnitudes catalogs of (stars), 25 CCDs and, 24 Henden sequences, 26 history of, 24 Landolt, 25 vs. instrumental, 27 Stephens, Robert, 148
T Telescope Considerations, 83 Optics, 83–84 Time getting from Internet, 111 Transforms Clear to Johnson V, 50
in photometry, 31
V Variable Stars cataclysmic variables (CV), 17 Cepheids, 18 eclipsing binaries, 13 LPV, 18 Mira. See LPV modeling a system, 161 naming convention, 13 semi-regular, 18
Z Zero-Point as software feature, 107, 124 from Hardie method, 31 in lightcurve analysis applied to two sessions, 141 for adjusting session offsets, 140 for finding normalized flux values, 163 in photometry, 31