Dr. Dobb's Journal
#363 AUGUST 2004
SOFTWARE TOOLS FOR THE PROFESSIONAL PROGRAMMER
http://www.ddj.com
TESTING & DEBUGGING • • •
Testing Java Servlets Inside the Jpydbg Debugger Pseudorandom Testing Performance & System Testing Continuous Integration & .NET HTTP Response Splitting A New Kind of Attack Aspect-Oriented Programming–For C++? C++ & the Perils of Double-Checked Locking Computer Security When Format Strings Attack! Subversion for Version Control Runtime Monitoring & Software Verification
Building Callout Controls Tracing Program Execution & NUnit
Jerry Pournelle On Microsoft’s Aero, 64-bit Computing, & Dual-Core Processors The Return Of Dr. Ecco!
CONTENTS
AUGUST 2004, VOLUME 29, ISSUE 8

FEATURES
CONTINUOUS INTEGRATION & .NET: PART I 16
by Thomas Beck
In this two-part article, Thomas introduces a complete Continuous Integration solution.
TESTING JAVA SERVLETS 26 by Len DiMaggio
Java servlets differ from other types of programs, thereby affecting your testing strategies.
THE JPYDBG DEBUGGER 32 by Jean-Yves Mengant
Jpydbg, the debugger Jean-Yves presents here, is a Python plug-in for the JEdit framework.
PSEUDORANDOM TESTING 38 by Guy W. Lecky-Thompson
Guy examines how you can test objects by creating specific test harnesses with verifiable datasets.
PERFORMANCE & SYSTEM TESTING 42 by Thomas H. Bodenheimer
When collecting performance data from dozens — if not hundreds — of computers, automation is a necessity.
OPTIMIZING PIXOMATIC FOR X86 PROCESSORS: PART I 46 by Michael Abrash
In the first installment of a three-part article, Michael discusses his greatest performance challenge ever— optimizing an x86 3D software rasterizer.
HTTP RESPONSE SPLITTING 50 by Amit Klein and Steve Orrin
HTTP Response Splitting is a powerful new attack technique that enables other attacks.
ASPECT-ORIENTED PROGRAMMING & C++ 53 by Christopher Diggins
Aspect-oriented programming isn’t just about Java. Christopher presents AOP techniques for C++.
BUILDING CALLOUT CONTROLS S1 by Thiadmer Riemersma
Thiadmer’s balloon-style Windows control is configurable for many purposes.
TRACING PROGRAM EXECUTION & NUNIT S7 by Paul Kimmel
NUnit and .NET’s TraceListeners help you eliminate bugs from code.
SYNCHRONIZATION DOMAINS S10 by Richard Grimes
The best place to avoid deadlocks is in the design stage — and that’s where synchronization domains come in.
C++ AND THE PERILS OF DOUBLE-CHECKED LOCKING: PART II 57 by Scott Meyers and Andrei Alexandrescu
In this installment, Scott and Andrei examine the relationship between thread safety and the volatile keyword.
WHEN FORMAT STRINGS ATTACK! 62
by Herbert H. Thompson and James A. Whittaker
Format-string vulnerabilities happen when you fail to specify how user data will be formatted.
THE SUBVERSION VERSION-CONTROL PROGRAM 64 by Jeff Machols
The Subversion version-control program provides all the benefits of CVS — along with many improvements.
EMBEDDED SYSTEMS
RUNTIME MONITORING & SOFTWARE VERIFICATION 68 by Doron Drusinsky
Doron examines runtime monitoring, focusing on its application to robust system verification.
COLUMNS
PROGRAMMING PARADIGMS 73 by Michael Swaine
EMBEDDED SPACE 75 by Ed Nisley
CHAOS MANOR 77 by Jerry Pournelle
PROGRAMMER'S BOOKSHELF 80 by Paul Martz

FORUM
EDITORIAL 8 by Jonathan Erickson
LETTERS 10 by you
DR. ECCO'S OMNIHEURIST CORNER 12 by Dennis E. Shasha
NEWS & VIEWS 14 by Shannon Cochran
OF INTEREST 83 by Shannon Cochran
SWAINE'S FLAMES 84 by Michael Swaine
RESOURCE CENTER As a service to our readers, source code, related files, and author guidelines are available at http:// www.ddj.com/. Letters to the editor, article proposals and submissions, and inquiries can be sent to
[email protected], faxed to 650-513-4618, or mailed to Dr. Dobb’s Journal, 2800 Campus Drive, San Mateo CA 94403. For subscription questions, call 800-456-1215 (U.S. or Canada). For all other countries, call 902-563-4753 or fax 902-563-4807. E-mail subscription questions to ddj@neodata .com or write to Dr. Dobb’s Journal, P.O. Box 56188, Boulder, CO 803226188. If you want to change the information you receive from CMP and others about products and services, go to http://www.cmp .com/feedback/permission.html or contact Customer Service at the address/number noted on this page. Back issues may be purchased for $9.00 per copy (which includes shipping and handling). For issue availability, send e-mail to
[email protected], fax to 785838-7566, or call 800-444-4881 (U.S. and Canada) or 785-8387500 (all other countries). Back issue orders must be prepaid. Please send payment to Dr. Dobb’s Journal, 4601 West 6th Street, Suite B, Lawrence, KS 66049-4189. Individual back articles may be purchased electronically at http://www.ddj.com/.
NEXT MONTH: September will come in bits and pieces, as we examine Distributed Computing from top to bottom.
DR. DOBB’S JOURNAL (ISSN 1044-789X) is published monthly by CMP Media LLC., 600 Harrison Street, San Francisco, CA 94017; 415-905-2200. Periodicals Postage Paid at San Francisco and at additional mailing offices. SUBSCRIPTION: $34.95 for 1 year; $69.90 for 2 years. International orders must be prepaid. Payment may be made via Mastercard, Visa, or American Express; or via U.S. funds drawn on a U.S. bank. Canada and Mexico: $45.00 per year. All other foreign: $70.00 per year. U.K. subscribers contact Jill Sutcliffe at Parkway Gordon 01-49-1875-386. POSTMASTER: Send address changes to Dr. Dobb’s Journal, P.O. Box 56188, Boulder, CO 80328-6188. GST (Canada) #R124771239. Canada Post International Publications Mail Product (Canadian Distribution) Sales Agreement No. 0548677. FOREIGN NEWSSTAND DISTRIBUTOR: Worldwide Media Service Inc., 30 Montgomery St., Jersey City, NJ 07302; 212-332-7100. Entire contents © 2004 CMP Media LLC. Dr. Dobb’s Journal is a registered trademark of CMP Media LLC. All rights reserved.
Dr. Dobb's Journal
PUBLISHER Timothy Trickett
SOFTWARE TOOLS FOR THE PROFESSIONAL PROGRAMMER
EDITOR-IN-CHIEF Jonathan Erickson
EDITORIAL MANAGING EDITOR Deirdre Blake MANAGING EDITOR, DIGITAL MEDIA Kevin Carlson SENIOR PRODUCTION EDITOR Monica E. Berg NEWS EDITOR Shannon Cochran ASSOCIATE EDITOR Della Song ART DIRECTOR Margaret A. Anderson SENIOR CONTRIBUTING EDITOR Al Stevens CONTRIBUTING EDITORS Bruce Schneier, Ray Duncan, Jack Woehr, Jon Bentley, Tim Kientzle, Gregory V. Wilson, Mark Nelson, Ed Nisley, Jerry Pournelle, Dennis E. Shasha EDITOR-AT-LARGE Michael Swaine PRODUCTION MANAGER Douglas Ausejo INTERNET OPERATIONS DIRECTOR Michael Calderon SENIOR WEB DEVELOPER Steve Goyette WEBMASTERS Sean Coady, Joe Lucca CIRCULATION SENIOR CIRCULATION MANAGER Cherilyn Olmsted ASSISTANT CIRCULATION MANAGER Shannon Weaver MARKETING/ADVERTISING ASSOCIATE PUBLISHER Brenner Fuller MARKETING DIRECTOR Jessica Hamilton AUDIENCE DEVELOPMENT DIRECTOR Ron Cordek ACCOUNT MANAGERS see page 82 Michael Beasley, Randy Byers, Andrew Mintz, Kristy Mittelholtz SENIOR ART DIRECTOR OF MARKETING Carey Perez DR. DOBB’S JOURNAL 2800 Campus Drive, San Mateo, CA 94403 650-513-4300. http://www.ddj.com/ CMP MEDIA LLC Gary Marshall President and CEO John Day Executive Vice President and CFO Steve Weitzner Executive Vice President and COO Jeff Patterson Executive Vice President, Corporate Sales & Marketing Mike Mikos Chief Information Officer William Amstutz Senior Vice President, Operations Leah Landro Senior Vice President, Human Resources Sandra Grayson Vice President & General Counsel Robert Faletra President, Group Publisher Technology Solutions Vicki Masseria President, Group Publisher Healthcare Media Philip Chapnick Vice President, Group Publisher Applied Technologies Michael Friedenberg Vice President, Group Publisher Information Technology Paul Miller Vice President, Group Publisher Electronics Fritz Nelson Vice President, Group Publisher Network Technology Peter Westerman Vice President, Group Publisher Software Development Media Shannon Aronson Corporate Director, Audience Development Michael Zane Corporate Director, Audience Development Marie Myers Corporate Director, Publishing Services
American Business Press
EDITORIAL
Coulda, Woulda, Shoulda
From time to time, we all reflect on what could have been. Let's see, there's Gary Kildall not returning IBM's phone call, John Sculley's dismissal of HyperCard, Pete Best leaving a band called the Beatles to make room for a drummer named Ringo Starr, and just about every version of UNIX other than Linux. The list goes on and on.

For me, it was grass trimmers. Warned early on about the perils of BB-guns ("you'll shoot your eye out!") and lawnmowers ("you'll cut your foot off!"), I once designed a lawnmower with flexible blades — specifically, nylon strings that spun around in circles. But then, I thought, if this was such a good idea, someone else surely would have thought of it. And, as it turned out, someone did — a few years later. Looking for a better way to cut grass around trees, George Ballas was inspired by the spinning nylon bristles at his local automatic car wash. According to legend, he then went home from washing his car, punched holes in a tin can, ran knotted fishing line through the holes, and attached the contraption to a rotary electric edger. Voilà! The "Weed Eater" — and another Texas millionaire — was born.

These days, my coulda, woulda, shouldas are more along the lines of articles I wish I'd written or published. For instance, after spending a couple of maddening weeks tracking down and eliminating some really nasty computer viruses, I jokingly said that I'd support capital punishment for virus writers. What'd you know, but a few days later, Steven Landsburg's article "Feed the Worms Who Write Worms to the Worms" popped up on Slate.com (http://slate.msn.com/id/2101297/). More than just a rant, Landsburg makes a solid, albeit tongue-in-cheek, argument for executing virus writers who, according to estimates, cost society $50 billion a year. (That said, Landsburg neglects to factor in revenue generated by antivirus companies, such as McAfee, Symantec, LavaSoft, and others.)

Applying economic justifications usually associated with capital punishment, Landsburg points out that the usual deterrence benefit to society for executing a killer is about $100 million. He goes on to say that if a single execution would deter 0.2 percent of all virus writing, society would logically gain the same $100-million benefit that we do by executing murderers. Any more would be, as he says, simply gravy. I'll leave it to you to mull over Landsburg's arithmetic (not to mention ethics), but on a cost-benefit basis alone, his bottom line does suggest we'd be better off executing virus writers than killers. Of course, we all know that's not the case — unless you've just spent a couple of weeks going nuts over a virus, worm, or Trojan.

The biggest shoulda that Michael Dell admits to is "not getting into printers sooner." In a New York Times article (May 24, 2004) that I woulda liked to have published, reporter Steve Lohr examines the issues of innovation versus distribution — with Hewlett-Packard personifying innovation and Dell cast simply as a distributor. There's a lot at stake here for both companies. Overall, computer printers are a $106 billion-a-year pie, with HP carving out a $23 billion piece in selling 43.6 million printers. By comparison, Dell sold "only" 1.5 million printers in the first three quarters of 2003, although the company expects to sell 4 million this year. So in truth, Michael Dell is saying he woulda liked a bigger piece of the printer revenue pie, if he coulda. Compared to Dell, one of HP's strengths is its research and development.
Overall, says Lohr, the company invests more than $1 billion a year in printer R&D. Dell, on the other hand, focuses on private-label branding of — and adding features to — low-cost Lexmark printers, then hanging onto HP coattails in consumer distribution channels. To be fair, there’s nothing wrong with Dell’s focus on distributing, rather than innovating, printers. From algorithms to XP, the computer industry has been built, to paraphrase Sir Isaac Newton, by standing on one another’s shoulders. Recall that before Excel, there was VisiCalc. Before C++, there was C (and, before that, B). Before the RC6 block cipher, there was the RC5 cipher. And before Microsoft’s MS-DOS, there was Tim Paterson’s QDOS. Then there’s the whole concept of open source, which encourages innovation, improvement, and differentiation. Of course, I woulda said all this and much more if I had written the article that Lohr wrote. Maybe I shoulda. Maybe next time.
Jonathan Erickson editor-in-chief
[email protected]
LETTERS
More readTSC Optimization

Dear DDJ,
Tim Kientzle's fascinating article "Optimization Techniques" (DDJ, May 2004) contains a rather complicated definition for readTSC( ) in GCC. The simple way to do it in GCC is:

inline long long readTSC(void)
{
    long long result;
    __asm__ volatile ("rdtsc" : "=A"(result));
    return result;
}
(Of course, you may as well use “inline” here because there’s no point in making a call to a single-instruction subroutine!) The "=A" constraint tells GCC that the result is returned in EDX:EAX, as per the GCC documentation for machine constraints: "A" specifies the "a" or "d" registers. This is primarily useful for 64-bit integer values (when in 32-bit mode) intended to be returned with the "d" register holding the most significant bits and the "a" register holding the least-significant bits.
The volatile is essential because it tells GCC that the result might change; without this, GCC may only invoke readTSC( ) once. Andrew Haley
[email protected] Editor’s Note: Also, see Mark Rustad’s letter in DDJ, July 2004. Bugs Aren’t a Laughing Matter Dear DDJ, In regard to Jonathan Erickson’s May 2004 “Editorial,” it is worth noting that software bugs are a serious problem. After all, the lead systems programmer at Warner Brothers (“bugs” Bunny) was continually asking, “What’s up, doc?”. On a more serious note, bugs can be extremely expensive. I worked on two embedded systems projects (to produce new hardware and the software to run it) that failed. On one, the software was the primary cause of failure. On the other, both hardware and software were faulty. 10
Both projects burned more than $35 million in funding. On both projects, management wanted to “get it out the door” as soon as it was “good enough,” in the face of efforts by several technical people to get support for correcting the faults. Nevertheless, I do not consider management to be the sole cause of the failures. We do not live in a culture that values pride of workmanship. Technical people are still able to do reliable work, but their managers are sometimes under great pressure to cut corners, particularly when mistakes have been made early in a project. So, we need tools that reduce the cost of checking our work, but we also need to recognize the primary guarantee of high quality: the pride of the builders. Ian Gorman http://www.gorman.ca/ Mtlib Library Dear DDJ, It seems that everything old is new again. The technique described by Marco Tabini in “The Mtlib Memory-Tracking Library” (DDJ, October 2003) has been around in various guises for as long as memory has been allocated. It’s a worthwhile technique, though, so it certainly is worth introducing to a new generation of programmers. Marco is correct in identifying the process of hijacking memory-management calls as the core problem. In the “good” old days we had access to source of everything, so it was easy to modify the routines. These days, it’s not so simple, but there are a few alternative techniques and considerations that can be useful. The most valuable is the direct editing of object code and libraries. It is possible, with a suitable binary editor, to rename pretty much anything known externally. When doing this, you must follow the simple rule that you can replace characters in a name, but never add or delete them: The new name must be the same length as the old. Renaming things this way opens up interesting possibilities: You can substitute your own memory-allocation routines for those in a third-party library, particularly useful when a vendor’s code has bugs. You can build a replacement for the standard library that uses your own functions for all memory allocation, by changing the library’s names for those functions and supplying your own to the linker. (Contrary to Marco’s assertion, it is often useful to have access to all memory allocation in a program, not just what takes place in a specific block of code.) An unrelated technique that is commonly used with debugging-version memoryallocation routines is to add checks for overruns or uninitialized data bugs. A simple approach is to bracket each allocated block Dr. Dobb’s Journal, August 2004
of memory with additional bytes initialized to some known but uncommon value. When the memory is released (or more often) a check is made to see that the values have not been changed. Allocation routines can also initialize allocated memory to a different uncommon value, or a random number, to detect the use of uninitialized memory. Implicit in this is the concept of allocating more memory than was requested for use by the debugging routines. Doing so removes the need to build structures to save debugging information and opens the door for all sorts of interesting techniques. Jim Connell
[email protected] Broader C++ Compiler Coverage Dear DDJ, In the article “C/C++ Compiler Optimization” (DDJ, May 2004), Matthew Wilson compares the optimization for different C/C++ compilers. However, he didn’t include the C/C++ compilers on UNIX platforms, such as xlC for AIX, aCC for HPUX, and the C/C++ compiler with Sun Forte for Solaris. Many programmers develop software on UNIX platforms and, therefore, cannot benefit from Matthew’s article. Reza Asghari
[email protected] Matthew responds: Thanks for taking the time to write Reza. The simple answer is that the only UNIX flavor I currently have access to is Linux and I only have two compilers —GCC and Intel — for that platform. Wouldn’t be much of a story, would it? I’d love to be able to get hold of other compilers on other platforms, but most people these days seem circumspect about allowing any remote access into their systems. Simulation versus the Real Deal Dear DDJ, Jerry Pournelle’s column “Simulation & Modeling” (DDJ, April 2004) correctly shows the weaknesses of computer simulations. However, I think he misses one important difference between climate and footballmatch simulations: If a football simulation results in a slight hint towards an expected outcome of a match, we can safely ignore it. It won’t cause much damage if the match will, in reality, end this or the other way. But if several climate simulations give us a hint that not reducing our CO2 emissions may have fatal consequences for life on earth, we might better be careful and take precautions, just to be sure to prevent the possible (though not sure) catastrophic ending that the simulations predict. Manfred Keul
[email protected] DDJ http://www.ddj.com
DR. ECCO’S OMNIHEURIST CORNER
Smooth As Ice Dennis E. Shasha
Dennis is a professor of computer science at New York University. His latest books include Dr. Ecco's Cyberpuzzles: 36 Puzzles for Hackers and Other Mathematical Detectives (W.W. Norton, 2002) and Database Tuning: Principles, Experiments, and Troubleshooting Techniques (Morgan Kaufman, 2002). He can be contacted at [email protected].

The tabloids made it a cover story. It was unfashionable to build very tall buildings, so Donald Pump wanted to build the largest possible ice rink.

"He wants to buy at least one of our Zamboni ice resurfacers," Ecco's visitor, Tony Zamboni explained. "We want to design a complete solution for him. We have a tradition to live up to."

"A tradition?" Ecco asked.

"Yes," Tony responded. "You have to understand my grandfather's passion for ice. During the 1930s, Frank Zamboni manufactured ice for boxcars full of lettuce. When that business declined, he began building ice rinks in southern California. The climate there is tough on ice and he had to resurface frequently. This meant bringing out a tractor, shaving the surface, removing the shavings, spraying water over the surface, and allowing the water to freeze. During the hour this took, many of his customers would leave. So he invented — and then reinvented many times over the years — an ice resurfacer. These are still called 'Zambonis.' The tradition is that we must continuously improve our machines and the way they are used.

"The basic problem is that when a Zamboni drives, everyone must just sit and wait. It no longer takes an hour, but it may take half an hour. In this new millennium, that seems way too long. So, we want to make it accomplish its job as quickly as possible. The trouble is that the Zamboni doesn't have a very tight
turning radius. For this reason, it must sometimes drive over spots it has already resurfaced. The question is how to minimize the time. “Knowing that you are a mathematician, we have abstracted the problem to the slightly asymmetric shape Pump wants it to be like, as shown in Figure 1: A 4×8 grid of points with the corners cut off plus four more nodes on the top (where the winners stand when there are figure-skating tournaments)—32 points in all. “The distance between neighboring points is roughly the width of a Zamboni, so your goal is to have the Zamboni drive over every node at least once. At every darkened point (node), the Zamboni can turn 45 degrees from the direction it is moving in.” “So, if the Zamboni has moved from node A to node B as in Figure 2, it can move to node C if the angle formed by the rays AB and BC is 45 degrees,” Ecco’s niece Liane interjected. “In other words, certain paths are acceptable and others are not.” “Correct,” said Tony to the 16-year-old. “Going from a node to a neighboring node takes 30 seconds. What is the smallest number of minutes you need to en-
ter at the bottom (you can choose any bottom-most node), touch every node and then exit by some bottom node? Entering and exiting is from a driveway that is perpendicular to the bottom of the rink, so you can enter and exit at any angle you like.” Liane and her younger brother Tyler were able to find a solution in under 20 minutes. “That improves our time by a third,” the young Zamboni said. “The Donald should be pleased.” Reader: Would you like to see whether you can improve on this time? After Tony had left, Ecco turned to his niece and nephew. “Nice work,” he said. “How much bigger could you make the rink and still do it in the same time? Assume you have to keep the nodes of Pump’s rink, but could add some. How many Zambonis would you need to cut the time down to 10 minutes for the Pump rink as it stands?” I had to leave before I heard the answers. Reader: Would you like to give these a try? Check next month’s Dr. Ecco installment for the solution. DDJ
Impossible Path
Figure 1: The theoretical minimum number of node traversals is 31. How close can you come to that? Zamboni Problem: There are 32 nodes. You must start and end at an exterior node. The machine cannot turn more than 45 degrees at a node.
Possible Path
Figure 2: Certain paths are acceptable and others are not.
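For readers who want to attack the puzzle programmatically, the only fiddly part is encoding the turning rule. The sketch below is in C#, represents nodes as integer grid coordinates, and reads "can turn 45 degrees" as "at most 45 degrees," so that driving straight ahead is also allowed; both the representation and that reading are assumptions, and this is a legality check rather than a solver.

using System;

class ZamboniRules
{
    // True if the turn taken at B, arriving from A and leaving toward C,
    // is at most 45 degrees. Neighboring nodes differ by one grid step,
    // so headings are multiples of 45 degrees.
    public static bool LegalTurn(int ax, int ay, int bx, int by, int cx, int cy)
    {
        double headingIn  = Math.Atan2(by - ay, bx - ax);
        double headingOut = Math.Atan2(cy - by, cx - bx);
        double turn = Math.Abs(headingOut - headingIn);
        if (turn > Math.PI) turn = 2 * Math.PI - turn;   // take the smaller angle
        return turn <= Math.PI / 4 + 1e-9;               // 45 degrees, with tolerance
    }

    static void Main()
    {
        // Matches Figure 2: a 45-degree turn is fine, a 90-degree turn is not.
        Console.WriteLine(LegalTurn(0, 0, 1, 0, 2, 1));  // True
        Console.WriteLine(LegalTurn(0, 0, 1, 0, 1, 1));  // False
    }
}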
News & Views
Bees in the War Bonnet BEA and the Apache Software Foundation have jointly announced Apache Beehive, an open-source project based on the runtime application framework from WebLogic Workshop. The project’s stated aim is to create a J2EE environment capable of challenging Microsoft’s .NET Framework for ease of development and portability. Developers using Beehive “will be able to create web services, web applications, and Java controls using Workshop’s metadata-driven programming model, and then run these components on any J2EE application server” (http://dev2dev.bea .com/technologies/beehive/). The BEA-donated code includes the Controls architecture, which adds support for JSR 175 metadata to the JavaBeans component model. It aims both to simplify and to standardize the creation of back-end resource access components, “So defining and configuring client access to J2EE resources can be a property or wizard-driven process, and can be represented using visual design metaphors.” Similarly, the NetUI Page Flow subproject uses JSR 175 metadata to provide a visual development model for Struts web applications. Beehive also addresses web services and XMLBeans development.
New Linux Code Must Be Certified Free
A new report issued by the Microsoft-funded Alexis de Tocqueville Institution alleges that Linus Torvalds stole the original Linux code from MINIX and other UNIX sources. Andrew Tanenbaum, author of MINIX, calls the article "patent nonsense," and Torvalds himself jokes that Santa Claus and the Tooth Fairy must be credited as the true authors of Linux. However, in the wake of this report and the ongoing SCO suit — which also claims that proprietary code has been improperly included in Linux — Linus Torvalds and kernel maintainer Andrew Morton are requiring new code submissions to be accompanied by a certificate from each of the contributors. The Developer's Certificate of Origin (http://www.osdl.org/newsroom/press_releases/2004/2004_05_24_beaverton.html) affirms that the developers have the right to contribute the code under an open-source license, and lets code submissions be tracked through the kernel tree.
National Medal of Technology Nominations Open U.S. Secretary of Commerce Donald Evans has opened nominations for the 2005 National Medal of Technology, given annually to individuals, teams, or companies “to recognize those who have made lasting contributions to America’s competitiveness, standard of living, and quality of life through technological innovation, and to recognize those who have made substantial contributions to strengthening the Nation’s technological workforce.” Individuals, teams, and companies or divisions are all eligible for nomination. Past honorees include Dean Kamen, Vint Cerf, Robert Kahn, and IBM. Nominations (http://www.technology.gov/ Medal/Nomination.htm) will be accepted until July 28.
Eclipse Names Executive Director Mike Milinkovich, formerly the vice president of OracleAS technical services, has
been named executive director of the Eclipse Foundation. Milinkovich is no stranger to IDE programming. Before serving at Oracle, he worked on the Smalltalk IDE ENVY/Developer. More widely noted, however, is what Milinkovich hasn’t done — worked at IBM. “I have an understanding and perspective that is certainly not IBM-centric,” Milinkovich says, pointing out that his previous employers (Oracle and, before that, WebGain) were “very much competitors to IBM.” Milinkovich’s background is seen as significant, as the Eclipse Foundation’s independence from IBM is still disputed. Before Milinkovich was appointed, Sun Microsystems noted in an open letter, “The organization’s bylaws have given the director an unusual amount of power to form projects and assign resources. Will the director be an impartial guardian of the community (or be partial)?” Sun had no official comment on Milinkovich’s appointment, but Milinkovich said that he had exchanged e-mails with executives at Sun, and “the response that I got back was a warm one.”
Intel Opts for CPL Intel has announced plans to release its next- generation firmware foundation code (the successor to the 20-year old PC BIOS) and a firmware driver development kit under the Common Public License (CPL) open-source license. Under the CPL, any change in the foundation code and DDK made by one company must be visible and available to all. The next-generation firmware project is an implementation of the Extensible Firmware Interface (EFI).
Continuous Integration & .NET: Part I Weaving together an open-source solution Thomas Beck hen questioned by a colleague as to what open-source tools and methodologies I thought would most likely make the transition to .NET, my response was immediate — Continuous Integration tools. Continuous Integration is a term that describes an automated process that lets teams build and test their software many times a day. Continuous Integration, and Java-based tools such as Ant and JUnit that support it, have been a hot topic in the Java community for several years and are the subject of a number of popular books. In this first installment of a two-part article, I examine the building blocks of an open-source Continuous Integration solution. Several basic tools used to support Continuous Integration were ported to the .NET environment during the 1.0 release of the Framework. Other tools, such as CruiseControl and Clover, have been ported more recently. Furthermore, certain other tools such as NDoc are indigenous to .NET and only conceptually related to their Java counterparts (Java Doc, in this case). For the most part, articles written to date about these tools have addressed only a subset of the tools (usually NAnt and NUnit), thus failing to weave together a holistic Continuous Integration solution. Moreover, the unit-testing examples are function based and fail to address the database-driven reality of today’s enterprise applications.
W
In this two-part article, I introduce a complete Continuous Integration solution that encompasses automated builds, datadriven unit testing, documentation, version control, and code coverage. All of these tasks are performed automatically every time the project source code is modified. The results of the build (including the complete build history), the application’s user interface, the MSDN-style documentation, .NET Framework Design Guidelines conformance report, and the code coverage results can then be made available from a central project build site. Figure 1 is a conceptual architecture of this solution. Additionally, I introduce the components necessary to address the various tasks integral to Continuous Integration, adding new tasks to the NAnt build file to accommodate the tools as they are introduced. All of these tools, with the exception of Clover (an exclusive preview version of Clover.NET is available electronically; see “Resource Center,” page 5) and FxCop (which is covered under a Microsoft EULA), are covered by one of the common open-source licenses. Although the solution presented here was originally built on a Windows 2000 platform (including IIS), it can be easily modified to suit the needs of your application. These changes could accommodate a fully opensource .NET environment (Mono and Apache mod_mono, for instance) or align more closely with a pure Microsoft environment (Visual Source Safe, SQL Server, and the like).
Thomas is a manager with Deloitte. He is currently working on the firm’s Commonwealth of Pennsylvania account and is located in Harrisburg, PA. Thomas can be reached at
[email protected].
NAnt: Building a Simple Executable NAnt (http://nant.sourceforge.net/), the .NET port of Ant supported by the Jakarta project, is an XML-based build tool that is
(continued from page 16) meant to replace the combination of the Make build program and other scripts cobbled together by projects to build their applications in the past. Before installing NAnt, you need to have already installed IIS or the Microsoft Personal Web Server and the .NET Framework classes. If you haven’t done so, it’s important to install the Framework classes after the web server. It is also worth noting that Microsoft introduced MSBuild, its own .NET build tool, at the 2003 Professional Developer’s Conference. Expect to see this tool integrated into the upcoming Visual Studio “Whidbey” release as an alternative to NAnt. Once you have downloaded NAnt, extract the files into the /Program Files/NAnt/ folder and change the Windows PATH definition to include the \NAnt\bin directory. To verify that NAnt is installed correctly, type nant at the DOS command line. You should get a build error. That’s okay because you haven’t defined a build file yet. NAnt requires that the build file is either present in the directory you are in when you invoke the tool or that you specify the build file that you want to use with the -build file: option. The first NAnt build addresses the most basic of all programming scenarios — building a single binary executable that implements the standard “Hello World” functionality. Both the build file and C# and VB.NET source code for this example are available electronically. Listing One, the first NAnt build file, outlines the basic structure of this file and illustrates the critical components of an NAnt build file: • Project. The root element of the build file. Projects can contain a number of properties and a number of tasks. • Property. A property represents a name/value pair that can be used in the tasks. • Target. Targets represent a particular component of the build process (cleanup, build, execute). Dependencies may be established between the various targets in a project. Each target consists of one or more tasks. • Tasks. Tasks are discrete pieces of work that are performed within a target. This build file includes the mkdir and csc tasks (see http://nant.sourceforge.net/help/tasks/index.html for a complete list of tasks for NAnt). To build the executable, go to the directory where your build file is located and type nant at the command line. If there is more than one build file in your directory, make sure to use the -build file: option. If NAnt reports difficulty finding your compiler, make sure that the directory where the .NET compiler resides is reflected in your PATH. Moreover, depending on the version of the .NET Framework you’re running, check the nant.exe.config file that can be found in the NAnt \bin directory. In the
node of the file, you can set the default version of the framework that you want to use. The result of the NAnt build is an executable “Hello World” program that can run from the command line. Of course, it would have been easier to compile this program from the command line. NAnt’s true strength doesn’t begin to show until the builds get complex and time consuming and various external components (unit-testing, documentation) need to be integrated into the build process. NAnt Part II: Building a Component that Accesses Data The move from building an executable to building a component is, from an NAnt standpoint, a relatively simple transition. There are, however, other significant implications of moving to a DLL-based application. Foremost amongst these implications is the increase in the number of software objects in your appli18
Figure 1: .NET Continuous Integration environment. cation and the consequential increase in the complexity of building, testing, and documenting your software. This is where NAnt really shines. Here, I build and deploy a business object that exposes three core business methods. These three methods were selected because they illustrate the different aspects of unit-testing XML datasets, .NET-based collections, and traditional integer return type functions: • public dataset GetAuthors( ). A method used to get a dataset containing the names of all of the authors in the authors table. • public StringCollection GetTitlesForAuthors(string authorID). A method that returns a string collection containing all of the titles associated with a particular author ID. • public int TotalSalesForAuthors(string authorID). A method used to calculate the total book sales associated with the author identified by a particular ID. The example component included with this article uses MySQL (http://www.mysql.com/) as a database leveraging the Open Source ByteFX ADO.NET database driver (http://sourceforge.net/ projects/mysqlnet/). The ByteFX driver need only be extracted to a program file directory; NAnt makes sure that the DLL is available when compiling your component. The data access examples use a database entitled “pubs,” which contains a subset of the tables available in its namesake database available in SQL Server. SQL DDL statements for the setup of pubs are included as part of the source code if you are using a database other than MySQL or SQL Server. Since all data access is done via ADO.NET, these programs can be easily modified to communicate with your database of choice as long as you are using an ADO.NET-compatible driver for database connectivity. The build file for the component build is not much more complex than our previous build file; see Listing Two. There are several noteworthy changes included in this build file. First, the target statement has been modified to create a “library” and the inclusion of the import statements corresponding to the imports in our program. Second, references, which refer to metadata in the specified file assemblies, have been included for each of the DLLs needed for the build. Finally, a new target, deploy, has been added. It copies the .dll and the .aspx files to the appropriate location on the web server. This target is dependent on the build target and that target is now specified as the default target for the project. When you’ve run the build program, you will have created the appropriate library in your build folder and your web
(continued from page 18) server’s \bin folder. You can run the SQL DDL files to set up your database (as needed) and check out the web site you’ve built. This is the first step in your comprehensive build solution. The next step will be to integrate unit testing into our automated build process. NUnit: Automating Your Unit Testing NUnit (http://www.nunit.org/) is a tool for automating .NET unit testing, regardless of the .NET language you choose to program in. The NUnit tool includes a standalone GUI for managing and running unit tests as well as a command-line interface. In addition, plug-ins have been written for several popular .NET IDEs such as Visual Studio.NET and the open source #Develop. NUnit can also be invoked and automated using a special NAnt task. It is the last method that I explore here. Installing NUnit is as simple as double clicking on the Windows MSI installer file. Once you have NUnit installed, you can jump into unit testing the business object that you previously built. Although I concentrate on testing the business logic in this article, a fairly capable sibling application named “NUnitAsp” is available for testing ASP.NET server-side logic.
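Before turning to the tests, it helps to picture the kind of code under test. The following is only a sketch: the class name, connection string, and SQL are illustrative assumptions (the article's actual component ships with the online source code), and the provider-specific details of the ByteFX driver mentioned earlier should be treated as assumptions as well; only the standard ADO.NET interfaces are relied on.

using System.Data;
using System.Collections.Specialized;
using ByteFX.Data.MySqlClient;

public class AuthorsBusinessObject
{
    // Illustrative connection string only; adjust for your environment.
    private const string ConnectionString =
        "Server=localhost;Database=pubs;User ID=root";

    // Returns every title written by the given author, one string per title.
    public StringCollection GetTitlesForAuthors(string authorID)
    {
        StringCollection titles = new StringCollection();
        using (IDbConnection conn = new MySqlConnection(ConnectionString))
        {
            conn.Open();
            IDbCommand cmd = conn.CreateCommand();
            cmd.CommandText = "SELECT t.title FROM titles t, titleauthor ta " +
                              "WHERE t.title_id = ta.title_id AND ta.au_id = ?au_id";
            IDbDataParameter param = cmd.CreateParameter();
            param.ParameterName = "?au_id";   // '?' marker assumed for the MySQL provider
            param.Value = authorID;
            cmd.Parameters.Add(param);
            using (IDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    titles.Add(reader.GetString(0));
            }
        }
        return titles;
    }
}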
In Listing Three, the database target provides for the setup of the initial database objects and test data that are required for all of the tests to be run. In this case, I make a direct call to the mySQL program with the execute (-e) option to execute the SQL scripts. Calls to analogous utilities for your database of choice could be substituted here, as necessary. Note that the special &quot; entities in the execute call are XML-standard delimited quotation marks. Listing Four represents the test target and covers the setup and invocation of the NUnit tests. First, the nunit.framework.dll is copied to the build directory. Then the unit-testing class BusObjTstCS is compiled. Finally, the NUnit tests in this DLL are invoked using the nunit2 NAnt task. The special XML formatter element causes the results of the unit test to be written out to an XML file. This file is later used by our continuous integration engine, CruiseControl.NET, to interpret and display our testing results. At the heart of the Continuous Integration solution is automated unit testing, supported by NUnit. NUnit tests can be written in any of the .NET languages. The NUnit test fixture marks the class that contains the test methods. Each of these tests can check for a number of conditions, including true/false, null/not-null, and equal/not-equal. Table 1 outlines NUnit-testing fundamentals.
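To make the attributes in Table 1 concrete, here is a minimal sketch of a fixture in the same spirit as the article's BusObjTstCS class. The business-object class name, author IDs, and expected values are assumptions for illustration; the real fixture and its test data are in the online source.

using System.Collections.Specialized;
using NUnit.Framework;

[TestFixture]
public class AuthorsBusinessObjectTests
{
    private AuthorsBusinessObject busObj;   // hypothetical class under test

    [SetUp]
    public void Init()
    {
        // Create a fresh business object (and, in the real fixture,
        // reset the test database) before each test method runs.
        busObj = new AuthorsBusinessObject();
    }

    [Test]
    public void GetMultipleTitles()
    {
        // Author ID and expected count are sample test data.
        StringCollection titles = busObj.GetTitlesForAuthors("213-46-8915");
        Assert.AreEqual(2, titles.Count);
    }

    [Test]
    public void GetNoTitle()
    {
        // An unknown author should produce an empty collection.
        Assert.AreEqual(0, busObj.GetTitlesForAuthors("000-00-0000").Count);
    }
}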
The test fixture for this article contains a number of tests to exercise the business logic in our component:

• public void init( ). Technically, this is a setup fixture and not a test. For our purposes, this class is used for business object instantiation and database setup.
• public void GetAllAuthors( ). One of the most important test fixtures in our example. This fixture shows how to unit test database results — specifically, a DataSet in this example. The test uses the GetXml( ) and ReadXml( ) methods of the DataSet class to compare the XML rendering of the dataset returned by our GetAuthors( ) method to a pregenerated XML test file on the file system.
• public void GetMultipleTitles( ). This fixture, along with the similarly inclined GetSingleTitle( ) and GetNoTitle( ), tests that the correct title(s) and correct number of titles are returned by the GetTitlesForAuthors( ) method.
• public void GetTotalSalesMultipleBooks( ). This fixture, along with GetTotalSalesSingleBook( ) and GetTotalSalesNoBook( ), tests that the TotalSalesForAuthor( ) method returns the correct sales total for a given author.

The output of the test is written to the command line during the build and is also written out in an XML format. A failed result causes the entire build to fail (the haltonerror and failonerror attributes of the nunit2 task can be used to control this behavior) and outputs both the expected result and the actual result to the command line. Once you've achieved a clean build and are confident the tests are working, you can begin documenting the system.

NUnit Attribute: Description
TestFixture: Marks a class that contains test methods. Contained in the NUnit.Framework namespace.
Test: Marks a specific method within a class that has already been marked as a TestFixture. Test takes no parameters and uses one of several assertions: IsTrue/IsFalse, IsNull/IsNotNull, AreSame, AreEqual, Fail.
SetUp/TearDown: Performs test-environment setup tasks (such as establishing a database connection) prior to running the tests and restores the environment after the tests are complete.
ExpectedException: Specifies that the execution of a test will throw an exception. The test passes if the exception is thrown.
Ignore: Specifies that a test or test fixture should not be run for a period of time. This is preferable to commenting out or renaming tests.
Table 1: NUnit testing fundamentals.

NDoc: Professional Documentation Simplified
Microsoft established solid groundwork for the generation of professional-quality documentation with the release of the .NET Framework. The C# compiler uses reflection to extract specially tagged comments from the assemblies' metadata and create XML documentation files. The open-source NDoc tool (http://ndoc.sourceforge.net/) then uses these XML files to create MSDN-style HTML help files. Currently, this functionality is only available for C# programs, although all indications are that VB.NET will be capable of generating these XML comment files as well with the upcoming Visual Studio Whidbey release.
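For C#, the tagged comments the compiler harvests are ordinary /// XML documentation comments. As a rough illustration only (the actual comments ship with the article's source code, and the class and method names here repeat the earlier hypothetical sketch), documenting one of the business-object methods might look like this:

using System.Collections.Specialized;

public class AuthorsBusinessObject
{
    /// <summary>
    /// Returns the titles associated with a particular author.
    /// </summary>
    /// <param name="authorID">The ID of the author to look up.</param>
    /// <returns>A StringCollection with one entry per title.</returns>
    /// <example>
    /// <code>
    /// StringCollection titles = busObj.GetTitlesForAuthors("213-46-8915");
    /// </code>
    /// </example>
    public StringCollection GetTitlesForAuthors(string authorID)
    {
        // Implementation as sketched earlier; elided here.
        return new StringCollection();
    }
}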
In the meantime, a special tool (VB.DOC) is available to harvest VB.NET code for comments and to create the XML comment files. Both NDoc and VB.DOC can be integrated into the build process via NAnt tasks. You will also need Microsoft's HTML Help Workshop (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/htmlhelp/html/hwMicrosoftHTMLHelpDownloads.asp), which lets you compile your HTML help files into a standard CHM Windows help file format. If you use VB.NET as your .NET language or you would like to test out the VB.NET examples from this article, you will also need VB.DOC and the VB.DOC NAnt task (http://vb-doc.sourceforge.net/). In addition, to install the VB.DOC NAnt task, you have to copy the VB.DOC DLLs from the VB.DOC /bin folder to the NAnt /bin folder. To have your documentation automatically generated for the code, you need to change your build file and add the appropriate documentation comments to your source code. I concentrate on the former activity here. Combined with the source code for this article, several of the references at the end of this article will give you a good idea of how to document your classes. Due to the lack of native XML documentation file-generation capabilities in VB.NET, there are distinct differences in the build file based upon what particular .NET dialect you choose. The VB.NET build file includes the addition of a new document target (Listing Five), which includes calls to both the vbdoc and the ndoc tasks for generating documentation. The C# build file includes a similar document target, minus the call to vbdoc. Instead, the C# build file has an additional line to accommodate a compiler argument in the csc task informing it to generate the XML document:

<arg value="/doc:${build.dir}\${basename}.xml"/>

The source files (available electronically) contain comments to support basic documentation features such as method descriptions, inputs/outputs, and usage examples. In addition to basic MSDN-style documentation, NDoc also supports the ability to link back to the documentation for .NET Framework classes (provided that the classes are installed on the same machine as the NDoc documentation) and to use an include tag to link in external XML documentation sources. The latter feature is especially useful on larger projects to avoid the commingling of source code and documentation and the associated contention issues for access to combined source code and documentation files. With the tools outlined in this installment, you will be able to put together a process to automate the building, testing, and documentation of .NET applications. In the next installment of this article, I integrate these tools under the umbrella of a CVS-based continuous integration process and introduce the concept of code-coverage testing.

DDJ

Listing One
<project name="Hello World" default="build" basedir=".">
    <property name="basename" value="HelloWorld"/>
    <property name="build.dir" value="cs_build"/>
    <mkdir dir="${build.dir}"/>
    <sources>

Listing Two
<project name="Business Object" default="deploy" basedir=".">
    <property name="basename" value="BusObjCS"/>
    <property name="build.dir" value="cs_build"/>
    <mkdir dir="${build.dir}"/>
    <sources>
    <fileset basedir="aspx">
    </fileset>

Listing Three
<exec program="${sql.program}" commandline="${sql.db} -u root &quot;source ${sql.drop_file}&quot;"/>
<exec program="${sql.program}" commandline="${sql.db} -u root &quot;source ${sql.load_file}&quot;"/>

Listing Four
<sources>

Listing Five
<mkdir dir="${help.dir}"/>
<sourcefiles>
<summaries basedir="${build.dir}">
<documenters>
    <documenter name="MSDN">
        <property name="OutputDirectory" value="${help.dir}"/>
        <property name="RootPageContainsNamespace" value="false"/>
        <property name="SortTOCByNamespace" value="true"/>
Testing Java Servlets
Test considerations you need to keep in mind

Len DiMaggio

What characteristics differentiate Java servlets from other types of programs? How do these characteristics affect the manner in which you go about testing servlets? If you're going to test servlets, then you're going to have to be able to answer questions such as these. In considering servlets and how to test them, it's useful to review some fundamentals about what makes a servlet a servlet. The Java Servlet Specification (http://java.sun.com/products/servlet) defines a servlet as a:
Moreover, running applications through web servers supports distributed application configurations, where components can be hosted on multiple physical servers. Being able to load applications dynamically to be run by a web server makes down time a thing of the past. The Internet and Web mean that you can no longer assume that 2:00 AM your local time zone is a quiet time when you have customers everywhere from Boston to Belarus. The test implications here are the need to verify both distributed configurations and the ability of the servlet to run unattended over time.
…Java technology based web component, managed by a container, that generates dynamic content. Like other Java-based components, servlets are platform independent Java classes that are compiled to platform neutral bytecode that can be loaded dynamically into and run by a Java enabled web server…
The test implications of servlets being managed by containers are that you have to be concerned not only with how your servlet handles longevity (you want your servlets to run unattended for extended periods); scalability (you want the application supported by your servlets to be able to grow to meet increased usage); integration with other software (such as databases to support dynamic content); and security. Len is a senior software quality engineer and manager for IBM/Rational Software. He can be contacted at [email protected].
With this in mind, servlet-specific software test considerations you should keep in mind in designing and executing tests for servlets can be divided into these logical groupings: • Distributed application configurations. • Ensuring security. • Integrations with other programs. • Longevity and scalability. Distributed Application Configurations One factor that can increase a web application’s reliability is hosting it on multiple web servers, instead of on a single Dr. Dobb’s Journal, August 2004
server. If one server goes out of service, other servers can continue to respond to clients’ requests. You can see a simple illustration of this distributed approach if you use the nslookup command to retrieve the IP addresses of a well-known, highcapacity web site. From a user perspective, the distributed architecture in Figure 1 is seamless. Users enter the same URL, and their requests are handled. From a testing perspective, however, there are functions that you have to think about. Maintaining persistent data and state. In preInternet days, a major aspect of testing an application built on distributed servers involved verifying that once clients made connection to servers, the connections would be preserved by fault-tolerant mechanisms if a server failed. With servletbased web applications built around the HTTP protocol, you never have a persistent connection because the protocol is stateless. When you access a content-rich web site such as CNN or use a web application, your view is that you have fixed, dedicated connection from your client to the server. That view, however, is an illusion. The single “click” that displayed content in your browser actually sets in motion a potentially large number of discrete HTTP GET requests. When you’re testing servlet-based web applications, it may be the case that the “servlet environment” actually consists of instances of the servlet running on multiple physical servers. Configurations such as this ensure that applications will handle higher traffic than a single physical server and removes the server as a single point of failure. However, it also means that the application cannot maintain persistent data on a servlet’s server. If the servlet you’re testing supports a distributed web application, then it may maintain state via Java HttpSession objects API to store user information in session objects. These objects have to be made persistent by storing them http://www.ddj.com
(continued from page 26) in a central resource that is available to all instances of the servlet; for example, on an application or database server. With this in mind, the sort of tests you should perform include continuous session maintenance, which gives you the opportunity for some negative test scenarios. For instance, you can have test clients open sessions via servlets on one
server, then halt the web server or servlet container on that server and verify that the users’ sessions are maintained when they continue accessing the application via servlets on other servers. Coping with outages. One of the goals of any distributed architecture is to avoid a single point of failure. Distributing an application’s servlets across multiple physical servers can help do this, but there may
> nslookup
Name:      cnn.com
Addresses: 64.236.24.4, 64.236.24.12, 64.236.24.20, 64.236.24.28
           64.236.16.20, 64.236.16.52, 64.236.16.84, 64.236.16.116
Aliases:   www.cnn.com

Figure 1: nslookup output.
still be single points of failure in the application’s middle (application servers) or EIS tiers. What sorts of tests should you perform? This is another opportunity for negative test scenarios. You want to verify how the servlet under test reacts to either not being able to make or maintain a connection to its database or application server. Unlike the stateless client connections, these connections must be persistent. Depending on the details of the application’s design, you may see configurable retry algorithms. Regardless of the application’s design details, you should check that informative error messages are returned to clients in case resources needed by the servlets are not available. Handling logging and logs. If the servlets and their servlet containers are running on multiple servers, then their corresponding logs are either being generated on multiple servers or are being written to multiple files/directories on some shared file system. Either way, logging information ends up in more than one place. The servlet implementation can organize things that are explicitly generated by the servlet, but you’ll also have to deal with the logs generated by the servlet’s web server and servlet container. This distribution of logging can complicate efforts to debug problems as it makes it difficult to trace through a specific sequence of a client’s interactions with the servlet — and the application has to be able to manage the logs on multiple servers. What sorts of tests should you perform in this case? If you’re planning on implementing a distributed, servlet-based application, then you’ll have to think about some mechanism for collecting and parsing logs from multiple sources. Ideally, this mechanism includes a means to merge multiple logs together so that you can follow a client’s actions as it accesses multiple servlets. Tests for this type of mechanism can be divided into these categories: data collection (are the logs collected?), results calculation (are the logs merged and analyzed correctly?), and report generation and distribution (are reports created and distributed?). One of the more mundane aspects of writing out large numbers of logs is that the log files must be managed or you’ll run out of space. Testing for this is relatively easy— just fill up the disk and see how the servlet reacts when it tries to write to its log. A better approach is to have a utility (perhaps another servlet) monitor free disk space and proactively delete/archive old logs or send an alert to sysadmins. Ensuring Security The J2EE platform provides for policybased, configurable “fine-grained access”
control (http://java.sun.com/features/1999/ 12/j2ee.html). Whenever programs (including servlets) are loaded, they are assigned a set of permissions based on the security policy currently in effect. Each permission is tied to and enables access to a particular resource (being able to access a specific host and port, for instance). One common approach is to use Access Control Lists (ACL) (http://java.sun.com/j2se/ 1.4.2/docs/api/java/security/acl/Acl.html), supported by the java.security.acl interface. ACLs are data objects that tie resources and permissions to principals (individual users and groups). The extent to which security can be customized for an application, coupled with the Java’s built-in security features (exception handling, for instance) make it possible to create secure servlets. This doesn’t mean you can forget about testing for security, however. What aspects of security specifically affect servlets? It’s likely that the application supported by the servlet under test supports various levels of user permissions. Some users require administrativelevel permissions, while others will be limited to end-user permissions only. You’ll want to verify that these levels of permissions are enforced for various users and groups. You’ll also want to verify that both positive and negative permissions are enforced. The safest security policy is to disable all access by default, and then only enable access explicitly on a selective basis. A good negative test is to delete or otherwise disable users after they have logged in. The servlet should recognize that the user is no longer valid by requiring that the user be periodically reauthorized, even if a valid session object exists. It’s always possible for an application’s security to be compromised, regardless of how well its security policy is designed. What you want to look for in testing is to both verify the security policy’s design and how it is actually implemented. What sorts of tests should you perform in this case? Encryption can provide a good measure of protection for passwords and other data, so it’s important to verify that the data that’s supposed to be encrypted actually is. It’s also important to verify that the encryption isn’t inadvertently sidestepped. For example, encrypting passwords or other customer data used to establish a session is a good idea. Writing that same data in plain text to a log file is a bad idea. It’s often the case that servlet-based applications provide for remote administration via an admin servlet. This can be convenient, but dangerous. Permissions to enable access to this servlet must be strictly controlled. In addition to verifying these permissions, you should verhttp://www.ddj.com
ify that the use of this servlet is tracked via an audit trail. Remember that one of the characteristics that set servlets apart from other types of programs is that they run within a servlet container. Accordingly, any security flaws in the container directly affect the security of the servlet. These security flaws may take the form of actual functional bugs in the container, or they may be caused by an error in how the container has been configured. Then again, security flaws are sometimes caused by people simply doing foolish things such as running the container out of the (UNIX) root user account, instead of from a user account dedicated to the container (and configured with the permissions needed by the container, but no more). Integrations with Other Programs John Donne’s famous “No man is an island, entire of itself…” is true for software these days, too. It’s rare to find a program that doesn’t have to integrate with some other programs. Most software products are built with components supplied, at least in part, by third parties. Sometimes, this means open source or freeware, and sometimes partnering with another company. Either way, testing the integration of components, systems, servers, servlets, and the like is becoming an increasingly important category of testing. When you integrate multiple systems, you are in effect building a common language to bind the systems together. The elements of this language include software dependencies, configuration data, internal and external process coordination and data flows, and event reporting and security. Integration testing (“aggregation testing” might be a better term) verifies the operation of these language elements and of the entire integrated system, and identifies the points of conflict between them. The servlet under test will very likely interact directly with other server-resident programs, possibly including other servlets. In some respects, these interactions are no different than interactions between nonservlet programs. The distinguishing factor for servlets is that these interactions take place on and between server-based software. When it comes to the types of tests to perform, you’ll want to verify that the startup dependencies function correctly for the servlet under test. There are a couple of different levels of dependencies to consider. First, there are dependencies between the servlet container and programs upon which the servlet depends. A good example of such a program is a database server (process). The mechanics of handling this vary with the Dr. Dobb’s Journal, August 2004
operating system. For PCs the dependencies are specified in the services definitions, while for UNIX systems it’s done via startup scripts. There are also dependencies between servlets to be considered. The mechanics of handling this will vary with the servlet container being used. For Tomcat, it’s handled via the directive. Verifying dependencies is another good opportunity for negative testing in that you will want to determine what happens if programs, utilities, and so on, that the servlet is dependent upon are not in place. The servlet should handle the missing dependencies gracefully by logging (and displaying for users) informative error messages. These types of tests should be performed when the servlet is started and
after the servlet is running, as you’ll be exercising different code when the servlet tries to start up when a dependency is not in place and when it loses contact with that dependency while it’s trying to service requests from clients. Java’s exception-handling mechanism gives servlets a head start on security and reliability. But what about exceptions that are encountered in nonJava programs with which the servlet under test must interact? What sorts of tests should you perform? Some time ago, I encountered this situation: The servlet under test was integrated with a legacy source-code control system written in C++. During the early stages of testing, we encountered problems where the servlet’s container (Tomcat) simply exited with an error mes-
sage of “Exception outside of JVM.” As you can imagine, this error message wasn’t really helpful in debugging the root cause of the problem. We added debugging statements to the servlet to let us better track what was happening before the exception was encountered. This additional debugging information, coupled with improvements in newer versions of the servlet container (we were a little out of date) let us isolate the failing function within the code with which the servlet was integrated. The problem we had to deal with was that the operation we were attempting to perform within the legacy product was failing. If we attempted the same operation via that legacy product’s commandline interface, we received an error message. The lesson here is that your tests for the servlet should include forcing errors within the other products/code with which the servlet is integrated. The errors should be trapped by the other product and gracefully passed back to the servlet and its client. Longevity and Scalability Another characteristic that sets servlets apart from other types of programs is that, unlike programs that you start up, run for a particular purpose, and then shut down, servlets are intended to run unattended for extended periods of time. Once a servlet is instantiated and loaded by its container, it remains available to service requests until it is either made unavailable or until the servlet container is stopped. Accordingly, you should include specific longevity-related classes of tests in planning tests for your servlet. For servlets, the types of integrations and integration tests you’ll want to consider include finding memory leaks and verifying scalability. One of the most useful features of the JVM is its garbage collector, which analyzes memory to find objects that are no longer in use and deletes them to free up memory. In spite of the garbage collector’s best efforts, however, you can still have memory leaks in Java programs. These can be especially problematic for servlets because the leaks can grow while the servlet runs. How does the garbage collector determine if an object is in use? It looks for objects that are no longer referenced. A running Java program contains threads. Each thread executes methods. Each method references objects via arguments or local variables. Collectively, these references are referred to as a “root” set of nodes. The garbage collector works through the objects in each node, the objects referenced in each node, the objects referenced by those nodes, and so on. All other objects
on the heap (that is, memory used by all JVM threads) are tagged as unreachable and are eligible for garbage collection. Leaks occur when objects are reachable, but are no longer in use. What sorts of tests should you perform? In contrast to short-term functional tests, in this case, you want to verify long-term results. In effect, you’ll want to simulate real-world conditions where the servlet under test will have to function 24/7. So you’ll have to establish a level of traffic between test clients and the servlet. There are several tools (such as the IBM/Rational Robot testing tool) available to simulate user sessions. The client, however, is not your main concern. How the servlet reacts to a regular, sustained level of incoming requests is: You can monitor memory usage in several ways. The easiest way to start is simply by using utilities supplied by the operating system. On Windows, the Task Manager lets you view total system memory in use and also the memory being used by any process. On UNIX systems, similar tools such as sar (on Solaris), vmstat (on BSD-based UNIX flavors), or top provide you with some basic measurements on memory usage. You’re looking for increases in memory usage by the servlet’s container to handle the client traffic, where the memory
is never released when the traffic is stopped. If you do locate a pattern of memory loss with these basic tools, then you can investigate further with more specialized tools, such as the IBM/Rational Purify testing tool, to identify the offending classes/methods.
The term "scalability" generally describes how software under test can respond to an ever-increasing level of incoming traffic. A more subtle type of scalability testing that is more closely related to memory usage involves how a servlet is able to respond to requests when the contents of those responses are affected by the long-running nature of a servlet. What sorts of tests should you perform? You'll want to examine and verify the mechanisms to deal with data that starts
small, but grows as the servlet runs. For example, I recently worked on a servlet-based client-server application where one of the features supported by the client was the retrieval of log files that tracked operations executed by the servlet. Under functional testing, the retrieval worked without error. However, after the server/servlet had been running for several consecutive days, the lists of log files that the client could access, and the log files themselves, had grown so large that the servlet's performance was severely degraded when it had to extract, format, and deliver log information on a large scale back to the client. The initial design of the servlet returned the entire set of requested log information in a single, large vector of text strings. Ultimately, the design was altered to return the information one page at a time. This resulted in multiple requests to the servlet, but each request only resulted in a small response.

Conclusion
To successfully test servlets, your tests have to take into account that servlets must take advantage of the features supplied by Java and must live within the limitations of the servlet model. DDJ
The Jpydbg Debugger A debugger plug-in for Python Jean-Yves Mengant
Jean-Yves is CTO of SEFAS, a French software company where he works on development tools and computer languages. He can be reached at [email protected].

The architectures of open-development environments such as Eclipse and JEdit are based on the concept of frameworks with plug-in components. While a number of Java debugger plug-ins have been released for these IDEs, there are only a few for Python. Consequently, I present in this article Jpydbg, a Python debugging plug-in for JEdit (http://www.jedit.org/). Jpydbg provides a graphical interface and shell for debugging Python programs. The debugger's backend is implemented as a networking debugger, inheriting the standard bdb.py Python debugging kernel, while the graphical frontend is based on the Swing GUI interface. The complete source code and related files for Jpydbg are available electronically (see "Resource Center," page 5) and at http://jpydbg.sourceforge.net/.

Python Debugging Basics
Of course, you may wonder why I just didn't use Python's basic pdb debugger instead of writing a new one. The reason is that pdb is character based (based on stdin/stdout), which makes it impractical (if not impossible) to integrate into a GUI-based IDE like JEdit. That said, both pdb and Jpydbg inherit from the bdb.py class, which is central to any Python debugging tool. Moreover, the bdb.py class makes it possible for debuggers to acquire necessary information about the program being debugged. Central to this is tracefunc, a callback function (part of the C Python
kernel), which serves as a default entry point for debuggers; see Example 1. The important parameters of tracefunc are: • self, the current bdb instance. • frame, the current running Python frame instance. • event, the debugging event type at the origin of current callback. • arg, which represents complementary information depending on event populated type. The Jpydbg child class methods dispatch_line, dispatch_call, dispatch_return, and dispatch_exception take precedence over the bdb methods called by the bdb dispatcher. Once the Python interpreter returns control, you can begin collecting interesting debugging information — source file location of the running script, current source line in progress, global variables, local variables, and the like. Of particular interest is the Python frame structure, one of the parameters of the callback function (Example 1). The frame structure gives you the current Python execution context in progress. For details on the frame structure, refer to frameobject.h and frameobject.c (part of the Python kernel). Finally, Jpydbg uses the standard sys.settrace Python function to activate/deactivate the callback when needed. To activate the callback, Jpydbg uses sys.settrace(trace_dispatch), the candidate method object to activate; to deactivate, it uses None. Jpydbg Design Again, Jpydbg is a client/server, TCP/IP daemon-based debugger. For maximum flexibility, at startup Jpydbg either listens on a dedicated port waiting for incoming debug frontend solicitors, or connects back to a listening remote debugger frontend when hostname and ports are provided as command-line arguments. The latter strategy is better for remote debugging, and works even if there’s a firewall between the debugger and debuggee. Of course, I Dr. Dobb’s Journal, August 2004
could have based Jpydbg on nonTCP/IP protocols such as DLLs. However, an advantage to the TCP/IP-based approach is the clean process isolation between the debugger and debuggee. Among other things, this clean isolation gives you the capability of writing the debugger GUI frontend in a different language or platform. It also reinforces compliance with Heisenberg’s principle, which refers to how complex it is to measure and observe a system (the debuggee) without the observation (the debugger) disturbing the system and changing its behavior. As Figure 1 illustrates, on the client side the Jpydbg frontend (which is written in Java) sends text-based, commandline requests containing optional arguments. On the server side, the backend debugger (written in Python) parses the received requests and sends the results and the debugger’s events back in XML format over the TCP/IP session. As you can see in Figure 2, Jpydbg’s XML DTD is straightforward. Likewise, the jpydaemon.py structure (available electronically) is also straightforward. The initial part consists of setting up a TCP/IP-based socket protocol and entering a message network wait loop, which waits for commands to come in. There are two kinds of initial commands that may come in: • A Python shell command, which is executed like a standard character-mode Python shell. The only difference is that the command stdout is sent back in XML format over the wire. • A debugging startup request command, which is included with the Python module and optional program arguments. This command makes jpydaemon.py enter debugging mode, which then launches the debuggee through the inherited bdb.py run method. Debugging callbacks are set by the run method before activating the debuggee. Once a debugging event (say, a new Python line entered or breakpoint reached) http://www.ddj.com
(continued from page 32) is triggered, the Python kernel gives control to one of the overridden jpydaemon.py debugging event-listener methods —user_call, user_return, user_line, or user_exception. Globally, there are only a couple of difficulties you face during implementation: One is taking care of mangling XML reserved words and character tokens inside a debugging message; another is capturing/dispatching exceptions, because exceptions such as SystemExit, SyntaxError, NameError, or ImportError need to be captured and handled on the backend side to generate specific debug events. Actually, it is a bit of a misnomer to say at this point that the Jpydbg GUI frontend
is a "Java frontend." Before calling it that, you need to integrate the debugger and IDE, such as Eclipse (using SWT) or JEdit (using JFC/Swing). The ClientDebuggerShell.java class (available electronically) is included for debugging XML debugging-generated messages via the quick and dirty Debug Rough Command Interface; see Figure 3. Note that this utility class is not usable as a Python debugger in itself, and only debugs XML backend information.

The JEdit Plug-In
JEdit is a freely available GNU cross-platform IDE/editor that provides an extension plug-in API. JEdit-supported components can be downloaded and
def trace_dispatch(self, frame, event, arg):
    if self.quitting:
        return # None
    if event == 'line':
        return self.dispatch_line(frame)
    if event == 'call':
        return self.dispatch_call(frame, arg)
    if event == 'return':
        return self.dispatch_return(frame, arg)
    if event == 'exception':
        return self.dispatch_exception(frame, arg)
    print 'bdb.Bdb.dispatch: unknown debugging event:', 'event'
    return self.trace_dispatch
Example 1: The tracefunc callback function.

Figure 1: Jpydbg design. (The JPyDbg frontend — an XML client-side frontend and an IDE plug-in browsing panel — communicates with the JPyDbg backend — jpydaemon.py, the JpyDbg server-side daemon, built on the standard bdb.py Python debugging module — over a TCP/IP XML communication layer.)

Figure 2: Jpydbg's XML DTD. <JPY> encloses a debugger message transmission; <JEXCEPTION> carries detailed formatted exception information; <STDOUT> carries the debuggee's captured stdout; <STACKLIST> and <STACK> carry the current Python stack information; further elements carry elementary variable contents, returning-from-callee messages, step-in-line details, and entering-subprogram messages; <WELCOME> acknowledges debugging session start, and a final element acknowledges debugging session termination.
Figure 3: JEdit plug-in.
Figure 4: Jpydbg debugging the Python Zope kernel.

automatically integrated into JEdit without leaving the IDE. The way plug-ins are integrated within JEdit is a model of simplicity and robustness. In general, GUI components for JEdit are built using the standard Java Swing interface and bundled into a separate package — org.jymc.jpydebug.swing.ui. This package contains the main debugger's Frame container and associated tab panels, which represents
the foundations of the debugger's interface. This package also provides reasonable isolation of the GUI layer from the JEdit IDE plug-in semantics. A second package of classes manages the JEdit plug-in interaction with the debugging layer. Figure 4 shows the Jpydbg frontend and the XML jpydbg.py layer actively debugging the Python Zope kernel. In addition to the debugging layer, the full plug-in contains a Python class-browsing tree and a Python syntax checker.
Conclusion
Relying on and inheriting the bdb.py kernel debug library makes it possible to build reliable new tools like Jpydbg. Moreover, the Python jpydbg.py module can be integrated into the standard Python library to provide a full network-based XML layer that can be used to implement independent, robust GUI client-side debugging environments. DDJ
Pseudorandom Testing Test harnesses & verifiable datasets Guy W. Lecky-Thompson
Guy is a video game design consultant and author of Infinite Game Universe (Charles River Media, 2002). He also conducts research into the use of pseudorandom and genetic algorithms in software applications. Guy can be contacted at http://www.lecky-thompson.net/.

Part of the process involved in taking a software project from planning to completion is geared toward testing the code that makes up the core of the application. There are numerous tools that you can use to test the user interface (WinRunner comes to mind), which let scripted test cases be executed to simulate the actions of users. This is well and good, and sophisticated sets of tests can be created that let most bugs in the UI design be caught relatively efficiently. This efficiency stems from the principle that the test tool can simulate user actions without the actual user being present. An additional bonus is that it does not get bored, and can run all day and all night — something you can't expect of a human. However, the drawback is that you have to wait until the UI interface is ready before the testing can commence. Often, the inner workings of the application, the libraries that are to be used, and the functionality that provides the data processing are ready before the user interface. This means that programmers and test teams need to devise methods of validating these core parts of the system prior to releasing them as part of the final package. Failure to do this results in an incredible amount of inefficiency as bugs
are only found late in the development process. Part of the problem can be resolved by using rigorous design methodologies and principles for determining that the software should be correct, but you’re never sure that the programmers will capture the exact nature of the proven design in the implementation. What’s required is a test harness that offers the functionality of a UI test tool, but can be used in conjunction with the libraries that make up the core processing functionality of the application. Because these will be entirely nonstandard and change from application to application, a generic tool is difficult to envision. The success of tools such as WinRunner is based on the fact that a GUI such as that offered by Microsoft Windows is composed of standard pieces. There are buttons, text boxes, menus, and the like, which can each be tested in a generic fashion, as well as in conjunction with each other. Microsoft has done a lot of work to ensure that this remains the case, and the various exposed parts of the GUI elements (such as window handles) are well documented. Clearly, this will not be the case for the various data objects that make up the core functionality of the application under test, as well as their associated methods. The Solution: Part I Using the release of the SHA-1 algorithm (which provides secure hashing functionality for IDEA and Triple DES encryption) as an example, you can see that this is a problem that has been addressed before, using specific datasets with known inputs and outputs. The driving force behind the test is that, once the algorithm has been implemented, you can provide it with a test dataset and compare the result with the known outputs. If they match, then the algorithm has been correctly implemented, according to the design, and everyone is happy. Dr. Dobb’s Journal, August 2004
However, there will be cases where the input is unpredictable, and the output should match it, or be within a set of parameters, and no strict dataset can be created, which will enable such a philosophy to be used. Data storage and reproduction is a classic case, and it is this that I address first. Assume that a class that represents a specific data object needs to be implemented, in this case strings and sequential identification numbers for an employee database. Listing One is a possible C++-style implementation of the object. The contents of Listing One lead nicely into a slight digression. It turns out that the most effective way of testing such examples is to pass the header files, along with precompiled libraries, to a separate team of people who usually haven’t had a hand in creating the actual implementation. From the design and the header file, it should be obvious how the object is supposed to behave, and can hence be tested. In this case, the test team might look at the header file and determine that the most effective way to test the implementation would be to create an instance of the CEmployee object, instantiated with known values, and use the data access methods to verify that the correct values have indeed been stored. Consequently, the test team might come up with a test harness like Listing Two. If there is no output on the screen once this little test harness has been executed, then you can assume that the implementation is correct and the quality check has been passed. You must also assume that the functions that you have chosen to use for data verification, in this case the strcmp function from the Standard ANSI C Libraries, and the nonequality operator have been tested beforehand. At this point, if you are actively involved in software testing, you’re probably rolling your eyes in disbelief — no, I haven’t forgotten exception cases, I just haven’t mentioned them yet. http://www.ddj.com
(continued from page 38) An exception case is one that is assumed to use data that is illogical or incorrect. Thus, you might want to introduce checks along the lines of Listing Three. Whether these kinds of test cases make sense depends on the library as well as target application, but the techniques that we are about to introduce can be augmented by the use of exception cases to provide a more complete test harness. They are simply not my focus here. The Solution: Part II The preceding test cases were static, using a single set of values. This might seem like enough, but there will be cases where the test team might want to perform some stress or bounds checking to ensure absolute stability, or even just to see where the limits of the system might lie. In such cases, it would be tedious to manually create sets of test data to verify these aspects of the system. Imagine trying to test for the largest possible name that the CEmployee class will support, or whether the entire ANSI character set is supported, using a hand-coded test harness. Instead, you would probably prefer to generate the test data, apply it to the object being tested, and verify the correctness of the implementation that way. This
is not actually as daunting as it might at first seem, but there is an obvious and less obvious solution. First, the obvious one. For string testing, you need to verify that every printable character supported by the standard ANSI character set can be stored and retrieved. As luck would have it, you do not even need to know what these are on the target platform, since the ctype.h standard header file should contain a function (or macro) isprint (int c), which returns True if c is printable. Leaving out the actual test, which is identical to that in Listing Two. Listing Four shows this technique in action. There is a drawback to using this approach, which becomes apparent when you consider the next phase in the testing process — variable-length, variablecontent strings. You might be tempted to try and encapsulate the aforementioned process into a function CreateRandomString, which would take a variable of type char ∗ and fill it with random printable characters. This function might look akin to Listing Five and admittedly would do the job, and could be used as in Listing Six. This seems perfectly adequate, and probably is. Since the standard ANSI random-number generator can be seeded to produce a stream of repeatable values, it can even be described as provid-
ing static datasets of arbitrarily long strings filled with printable characters. The problem is that it might not be portable — each platform might generate different streams of characters. What is more, the use of malloc to create the memory block introduces a variable that cannot be verified prior to being initialized, which adds to the uncertainty of the test harness.

A Better Way
The second, less obvious solution requires patience in understanding. If you assume that the isprint function only provides visible printable characters, leaving out control characters, then surely you can combine the two methods to create a set of static test data containing elements that can be set to known, verifiable values prior to compiling the test harness. This intriguing thought needs two separate pieces of software. The first creates the second, which is the test harness proper. Listing Seven shows the complete application to create the test harness for verification of a single set of random values for the CEmployee string handling. A similar approach can be used to test the long integer handling. Even better than setting up the test data variable is the slightly more convoluted version:

// Print out the test process
fprintf( hOut, "CEmployee * OEmployee = new CEmployee ( \"%s\", 1 )\n", test_data );

Using this version also requires a similar change to the test process itself:

fprintf( hOut, "if ( strcmp ( OEmployee->GetName(), \"%s\" ) != 0)\n", test_data );
The principles need to be extended, of course, to provide for multiple datasets within the same test harness file; judicious use of the malloc function in conjunction with the two small alterations to Listing Seven can be employed to provide a much larger dataset than we have seen here (see the sketch below). It is better than the first, more obvious method because it lets the test team look at the source code and know exactly what values are being used to populate the data contained in the OEmployee object — the test data is more transparent. For this reason, you can use nonstatic variables in the application that creates the test harness (such as char *) and still be able to verify the data passed to the constructor. This is something you could not do with the original solution.
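To make the multiple-dataset extension concrete, here is one possible shape for such a generator — my own illustrative sketch, not the author's code. It assumes CreateRandomString from Listing Five; NUM_CASES and MAX_NAME_LEN are arbitrary values chosen for the example, and, like Listing Seven, it does not escape quote or backslash characters that might appear in the generated strings.

#include <stdio.h>
#include <stdlib.h>
#include "RandomString.h"   /* for CreateRandomString, as in Listing Seven */

#define NUM_CASES    25     /* illustrative values, not from the article */
#define MAX_NAME_LEN 64

int main ( void )
{
    FILE * hOut = fopen( "CEmployeeMultiTest.cpp", "w" );

    fprintf( hOut, "#include <stdio.h>\n#include <string.h>\n#include \"CEmployee.h\"\n\n" );
    fprintf( hOut, "int main ( void )\n{\n" );

    for ( int i = 0; i < NUM_CASES; i++ )
    {
        /* Each case gets its own random, but literal, test string. */
        long int lSize = 1 + rand() % MAX_NAME_LEN;
        char * test_data = (char *) malloc ( (lSize + 1) * sizeof ( char ) );
        CreateRandomString ( lSize, test_data );
        test_data[lSize] = '\0';

        /* Write the literal value into the generated harness so the test
           team can read exactly what is being passed to the constructor. */
        fprintf( hOut, "    { CEmployee * OEmployee = new CEmployee ( \"%s\", %d );\n",
                 test_data, i + 1 );
        fprintf( hOut, "      if ( strcmp ( OEmployee->GetName(), \"%s\" ) != 0 )\n",
                 test_data );
        fprintf( hOut, "          printf( \"String test %d failed!\\n\" );\n", i + 1 );
        fprintf( hOut, "      delete OEmployee; }\n" );

        free ( test_data );
    }
    fprintf( hOut, "    return 0;\n}\n" );
    fclose ( hOut );
    return 0;
}

Each generated case is wrapped in its own block so that OEmployee can be redeclared, and the ID passed to the constructor is simply the case number.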
Conclusion
In this article, I have focused on ways in which objects can be tested by creating specific test harnesses with verifiable datasets in a much more efficient manner than hand coding them. Of course, the intermediate stages need also to be tested, but since the result is to be compiled, and can be reviewed by programmers, there is much less scope for error. While it can seem slightly overcomplicated for the simple example class I presented, the power of the technique becomes clearer as soon as that class begins to be extended. You could, for example, simulate data entry of many hundreds of thousands of records, to be held together by a linked list, with little additional extra coding. Although this could also be done using a tool such as WinRunner, these techniques can be used incrementally, before the interface is complete. The advantages are twofold. First, you catch errors earlier in the application creation process and, second, you can test code at the core of the application before it becomes further complicated with other libraries, such as the user interface or file handling. DDJ

Listing One
class CEmployee
{
private:
    char * szName;
    long int lID;

public:
    // Constructor
    CEmployee ( char * szName, long int lID );
    // Destructor
    ~CEmployee ( );
    // Inline Data Access Methods
    char * GetName() { return this->szName; }
    long int GetID() { return this->lID; }
};
Listing Two

#include <stdio.h>      // Standard ANSI C I/O
#include <string.h>     // Useful for string comparisons
#include "CEmployee.h"  // The object class to be tested

int main ( void )
{
    // Instantiate the test data
    char test_name [] = "This is a test name. 123. ABC.";
    long int test_id = 12345;

    // Create an instance of the CEmployee object, using the test data
    CEmployee * OEmployee = new CEmployee ( test_name, test_id );

    // Check that the data has been correctly stored
    if ( strcmp( test_name, OEmployee->GetName() ) != 0)
        printf("Name test failed!\n Expected [%s] but found [%s]",
               test_name, OEmployee->GetName() );
    if ( test_id != OEmployee->GetID() )
        printf("ID test failed!\n Expected [%ld] but found [%ld]",
               test_id, OEmployee->GetID() );

    // Clean up...
    delete OEmployee;
    return 0;
}
Listing Seven

#include <stdio.h>
#include <stdlib.h>
#include "RandomString.h" // For the CreateRandomString function

void main ( void )
{
    char szFileName[] = "CEmployeeTest.cpp";
    FILE * hOut = fopen( szFileName, "w" ); // Open file for writing

    // Create the preamble for the test harness
    fprintf( hOut, "#include <stdio.h>\n\n #include \"CEmployee.h\"\n\n" );
    fprintf( hOut, "void main ( void )\n{\n" );

    // Set up the test_data
    char * test_data;
    long int lSize;
    lSize = rand() % (MAX_LONG_INT / 2);
    lSize += rand() % (MAX_LONG_INT / 2);
    test_data = (char *) malloc ( (lSize + 1) * sizeof ( char ) );
    CreateRandomString ( lSize, test_data );

    // Print out the dataset
    fprintf( hOut, "char test_data[] = \"%s\"\n", test_data );
    // Print out the test process
    fprintf( hOut, "CEmployee * OEmployee = new CEmployee ( test_data, 1 )\n" );
    fprintf( hOut, "if ( strcmp ( OEmployee->GetName(), test_data ) != 0)\n" );
    fprintf( hOut, "\tprintf(\"String test failed!\\n\")\n");
    fprintf( hOut, "}\n" );
    fclose ( hOut );
}
Listing Three

CEmployee * OEmployee = new CEmployee( "", 1 );        // Illogical, Empty name
CEmployee * OEmployee = new CEmployee( 1, "" );        // Incorrect parameters
CEmployee * OEmployee = new CEmployee( "Test", -1 );   // Illogical ID
CEmployee * OEmployee = new CEmployee( "Test", 3.5 );  // ID wrong type
Listing Four

...
char test_name[255]; // This array is larger than it needs to be
// Generate the test_name data, containing every printable ANSI character
int pos = 0;
for (int j = 0; j < 255; j++)
{
    if ( isprint(j) )
    {
        test_name[pos] = j;
        pos++;
    }
}
test_name[pos] = '\0'; // Just in case
CEmployee * OEmployee = new CEmployee ( test_name, 1 ); // For example
...
Listing Five

void CreateRandomString ( long int nLength, char * szText )
{
    long int j;
    j = 0;
    while ( j < nLength )
    {
        int c = 0;
        while ( !isprint ( c ) )
        {
            c = rand() % 255;
        }
        szText[j] = c;
        j++;
    }
}
Listing Six

char * szText;
long int lSize;
lSize = rand() % MAX_LONG_INT;
szText = (char *) malloc ( (lSize + 1) * sizeof(char) );
CreateRandomString ( lSize, szText );
szText[lSize] = '\0';
Performance & System Testing Automating the data-collection process Thomas H. Bodenheimer
Tom is a software engineer with IBM-Tivoli where he has been a member of various test organizations and specializes in software performance engineering. He can be contacted at [email protected].

As a performance tester of enterprise software solutions, I have to monitor and collect a large amount of performance data from a variable number of systems. The enterprise software I test involves distributed components running on a wide range of hardware and operating systems. Since these tests can involve anywhere from four to hundreds of actual computers, I need to automate the collection of performance data as much as possible. On Windows machines, I had been using a combination of Perfmon and Logman (both included with Windows) to gather the performance data during test runs. But setup of these programs involved a manual process of selecting those devices for each physical system I wanted to report on. Some systems are simple, single-processor, single-disk, desktop-style boxes while others are multiprocessor, multiple-hard-drive, high-powered servers. My problem was that I needed to eliminate the manual configuration involved with Perfmon and Logman of these physically different machines while still collecting system-specific data for all machines. I also wanted a simple command-line way to start and stop data collection. Since I often use Perl to analyze the data and generate summary reports for all machines used in a
test, the collected data needed to be in a text format. I developed a solution that leverages the Microsoft Performance Data Helper (PDH) library. By using the PDH library, a simple, single program automatically monitors a wide range of different hardware and collects performance data on all the system devices. The PDH library lets me monitor all the physical — processor, hard disk, and network— activity during tests. By wrapping this function into a Windows service, I gained the commandline start/stop ability that I desired. My primary job responsibilities are to run the actual tests; thus, I have limited time to dedicate to tool development. Using the PDH library in a Windows service, I minimized the amount of time I spent getting the tool up and running. The Microsoft Performance Data Helper Library The underlying performance monitoring function used in Perfmon and Logman is provided by the Microsoft Performance Data Helper (PDH) library. The basic function of the PDH library is contained in the PDH.dll and available to C/C++ developers by including the pdh.h and pdhmsg.h libraries in their code. I recommend that you download and use the latest version of the Microsoft Platform SDK to take advantage of all the functions available in recent versions of the PDH library. The basic abstraction of performance monitoring in Windows is the counter. The MSDN online documentation defines a counter as “a performance data item whose name is stored in the registry.” Performance object counters are defined for physical components such as processors, disks, and network interfaces, while system object counters are defined on processes, threads, and other operating-system objects. Counters are grouped together in queries so that they can all be collected at the same time. Counters are defined by their name and path. Example 1(a) shows the basic Dr. Dobb’s Journal, August 2004
format of this path, while Example 1(b) presents examples of actual counter paths. As you can see, not all counters have a ParentInstance or InstanceIndex. There are wildcard functions available that will take a basic path and expand it to match those counters available on a specific system. Example Counter Strings I played around with several different string formats when trying to find the counter path names for the different physical object counters I wanted to monitor. Referring again to Example 1(a), the first thing I found was that I didn’t really need to include the Machine name part of the string. Also, I initially struggled with trying to figure out how to use the ParentInstance/ObjectInstance#InstanceIndex part of the counter path. Different counters may or may not have all of these parts of the counter paths. For example, the processor counters will have counter path names like: \Processor(0)\% Processor Time
What I found was that I could simply use the wildcard symbols in the counter path for all of the ParentInstance/ObjectInstance#InstanceIndex counter paths for any counter. Thus, I can just use: \Processor(*/*#*)\% Processor Time
And I can use the (*/*#*) wildcard syntax for every counter. The PDH wildcard expansion functions correctly expand out the names for me and I get all possible counter path name matches returned. Check out the included source code for examples of this. The Final Design For the PDH Component I created a simple class, PerfLogger (see Listings One and Two), that follows the necessary procedure to use the PDH library counters for performance data logging. Those steps are: http://www.ddj.com
1. Create a query using the PdhOpenQuery function.
2. Add counters to the query:
   (a) Generate an array of counters using either the PdhExpandCounterPath or PdhExpandWildCardPath functions.
   (b) Add counters to the query with the PdhAddCounter function.
3. Open a logfile using the PdhOpenLog function.
4. Start logging data to the logfile using the PdhUpdateLog function.
5. Stop logging and close the logfile with the PdhCloseLog function.

Writing the PerfLogger class was quick and required a limited amount of coding. This class pulls out all the processor, physical disk, memory, and network interface statistics I want to collect during tests. It handles all the Windows machines in my testbed regardless of the physical hardware differences. The only arguments needed in its constructor are a full path and file name for the logging file, and the integer number of seconds between queries to the counters. The design allows some flexibility for other team members to use the PerfLogger class while putting the log file into any directory and using a different time interval between measurements.
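Condensed into a single routine, the five steps above look roughly like this. This is an illustrative sketch only — the full implementation, with error checking and the wildcard-expanded counter list, is in Listings One and Two — and the counter path, file name, sample count, and interval are example values (it also assumes an ANSI build, as the listings do):

#include <windows.h>
#include <pdh.h>
#include <pdhmsg.h>

void LogProcessorTimeSample(void)
{
    HQUERY   hQuery;
    HCOUNTER hCounter;
    HLOG     hLog;
    DWORD    logType = PDH_LOG_TYPE_CSV;

    PdhOpenQuery(0, 0, &hQuery);                                    // step 1
    PdhAddCounter(hQuery, "\\Processor(_Total)\\% Processor Time",  // step 2
                  0, &hCounter);
    PdhOpenLog("C:\\perflogs\\sample.csv",                          // step 3
               PDH_LOG_WRITE_ACCESS | PDH_LOG_CREATE_ALWAYS,
               &logType, hQuery, 0, NULL, &hLog);
    for (int i = 0; i < 10; i++) {                                  // step 4
        PdhUpdateLog(hLog, TEXT("sample"));
        Sleep(1000);                     // one-second interval between samples
    }
    PdhCloseLog(hLog, PDH_FLAGS_CLOSE_QUERY);                       // step 5
}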
Creating a Windows Service for the PerfLogger After creating the PerfLogger, all the function I needed for the actual logging was in place. But I still needed a simple command-line way to start and stop the logging. I had previously used the net start <service name> and net stop <service name> commands to start and stop Windows services from a command line. This seemed to offer all the function I needed and the steps to create a Windows service were well documented in the Microsoft Platform SDK. All services are managed by the Service Control Manager (SCM) in the Windows OS. The SCM is started at system boot-up and is a remote procedure call server. It is responsible for maintaining a database of installed services, starting services at system boot-up or on demand, and transmitting control messages to running services. There are five main interaction points with the SCM in creating and running a Windows service that developers work with: • One point is during the service installation. This requires connecting to the SCM on the machine and opening the SCM database. The OpenSCManager function is used to do this. Next, you need to use the CreateService function
to register your executable with the Service Control Manager. The CreateService function lets you set the location of the executable for the service, the name of the service, the context of the service (for example, running as its own process or as a shared process), and other SCM database values for the service. • The program should have a normal main method that has two major steps. It should create a SERVICE_TABLE_ENTRY that contains the name of the service and the ServiceMain function used to start the service. The next step is to call the StartServiceCtrlDispatcher function with the SERVICE_TABLE_ENTRY as the argument. This starts the control dispatcher thread, which loops and waits for control messages for the service. • The SCM calls the ServiceMain function to start your service. This function is analogous to the normal main function of a program. You must implement the ServiceMain function in your code. The first task in ServiceMain is to call the RegisterServiceCtrlHandler function to register the method that will handle the messages sent to the service by the SCM. At that point, any code that the service will execute is inserted. I have a while loop that creates an instance of my PerfLogger class, finds the performance
counters on the machine, and starts logging the performance data. • Another interaction point is the method that is registered during the RegisterServiceCtrlHandler call in ServiceMain. This function contains a switch statement so that the SCM can pass any control messages to the service. For my service, the only thing I needed to handle was the stop request. When receiving a stop request, a pointer to the PerfLogger instance from the ServiceMain function is used to stop the performance logging. It also sets the Boolean value in the while condition of the ServiceMain loop to False to stop that loop. • The last point is during deletion of the service from the Service Control Manager. Connect to the SCM and open the SCM database with the OpenSCManager function. Then call the DeleteService function to remove the service from the SCM database.
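A bare-bones skeleton of the ServiceMain, control-handler, and dispatcher pieces described above might look as follows. This is my own illustration rather than the article's code (the complete PerformanceLogger service is available electronically); the service name is a placeholder, and the installation and deletion steps (OpenSCManager, CreateService, DeleteService) are omitted:

#include <windows.h>

static TCHAR                 g_serviceName[] = TEXT("PerformanceLogger");
static SERVICE_STATUS        g_status;
static SERVICE_STATUS_HANDLE g_statusHandle;
static BOOL                  g_running = TRUE;

void WINAPI ServiceCtrlHandler(DWORD control)
{
    if (control == SERVICE_CONTROL_STOP) {
        g_running = FALSE;                  // ends the ServiceMain loop below
        g_status.dwCurrentState = SERVICE_STOP_PENDING;
        SetServiceStatus(g_statusHandle, &g_status);
    }
}

void WINAPI ServiceMain(DWORD argc, LPTSTR *argv)
{
    g_statusHandle = RegisterServiceCtrlHandler(g_serviceName, ServiceCtrlHandler);
    g_status.dwServiceType      = SERVICE_WIN32_OWN_PROCESS;
    g_status.dwControlsAccepted = SERVICE_ACCEPT_STOP;
    g_status.dwCurrentState     = SERVICE_RUNNING;
    SetServiceStatus(g_statusHandle, &g_status);

    while (g_running) {
        // Create a PerfLogger here, find the counters, and log them,
        // as described in the ServiceMain bullet above.
        Sleep(1000);
    }
    g_status.dwCurrentState = SERVICE_STOPPED;
    SetServiceStatus(g_statusHandle, &g_status);
}

int main(void)
{
    SERVICE_TABLE_ENTRY table[] = {
        { g_serviceName, ServiceMain },
        { NULL, NULL }
    };
    StartServiceCtrlDispatcher(table);      // blocks until the service stops
    return 0;
}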
This defines the basic steps and functions required to create a Windows service that can be started from the command line. Results and Other Applications Using the PerformanceLogger service (available electronically; see “Resource Center,” page 5) I created with these methods has simplified my performance and scale testing of enterprise software. (Also available electronically are a sample configuration file and sample output file.) I install the service on any Windows machine that is included in my test environment and I’m ready to collect the basic performance data I need. This shortens my preparation and setup time considerably. Automation is important to any software test effort that involves multiple machines and this tool removes previously necessary manual configuration steps. There are potential applications of the PDH library and Windows services that
(a) \\Machine\PerfObject(ParentInstance/ObjectInstance#InstanceIndex)\Counter

(b) \\MACHIN1\Processor(0)\% Processor Time
    \\MACHIN1\Processor(1)\% Processor Time
    \\MACHIN1\Processor(_Total)\% Processor Time
    \\MACHIN1\PhysicalDisk(0 C:)\% Disk Time
Example 1: (a) Basic format of a path; (b) actual counter paths.
could provide extra value to Windows administrators. Using a service like I’ve outlined could provide a cheap monitoring solution for Windows machines. You could potentially use the PDH library to build performance monitoring in programs to provide autonomic adjustment of system resource usage. Both the PDH library and the Windows services API offer quick development of test and monitoring tools. Testers can use the methods I’ve outlined to create test tools that address different aspects of the product testing. Play around with the example code and modify it to fit your environment. References Braithwaite, Kevin. Custom Performance Analysis Using the Microsoft Performance Data Helper; IBM WebSphere Developer Technical Journal; http:// www-106.ibm.com/developerworks/ websphere/techjournal/0310_braithwaite/ braithwaite.html. Anish, C.V. Creating a Windows NT/ Windows 2000 Service; Microsoft February 2003 Software Development Kit (SDK); http://www.codeguru.com/Cpp/WP/ system/ntservices/article.php/c5701/. DDJ
Listing One

#include "stdafx.h"
#include <pdh.h>
#include <pdhmsg.h>

#define INITIALPATHSIZE 2048

class PerfLogger{
    char logFile[512];
    int intervalBetweenMeasurements; //in milliseconds
    HQUERY hQuery;
    HLOG phLog;
    DWORD logType;
    BOOL logging;
public:
    PerfLogger();
    PerfLogger(char* logFileName, int interval);
    int findAndActivatePerfMetrics();
    void startPerfLog();
    void stopPerfLog();
private:
    PDH_STATUS getAllMetricsFor(char *wildCardPath);
};
Listing Two #include "PerfLogger.h" PerfLogger::PerfLogger(){ } PerfLogger::PerfLogger(char* logFileName,int interval) { strcpy(logFile,logFileName); /* The logType defines what type of log will be used for output. Since I use Perl to often summarize data, a comma separated value file is what I wanted. CSV log files are one of the options, so I was in business. */ logType = PDH_LOG_TYPE_CSV; /* We open a PDH query in the constructor - we'll add counters to it later. By having all our counters in one query, whenever a snapshot of the counters is taken, all the counters are sampled at that same time. */ PdhOpenQuery(0,0, &hQuery); /*
I needed to sample the counters for some integral number of seconds. The actual argument is in milliseconds, so we multiply by 1000. */ intervalBetweenMeasurements=interval * 1000; /*
A boolean value to help keep track of when logging should be ongoing or not.
*/
    logging = FALSE;
}
/*
A member to allow us to find the subset of individual machine counters I'm interested in logging. For me, I wanted the
    % Processor Time for all processors.
    % Disk Time for all disks
    % Disk Read Time for all disks
    % Disk Write Time for all disks
    Available Mbytes of memory during the monitoring period.
    Bytes Received/Sec for all network interfaces
    Bytes Sent/Sec for all network interfaces
Although it's bad practice, I don't check the return code status when wildcarding through the counters. So far, it hasn't caused me any problems.
*/
PerfLogger::findAndActivatePerfMetrics(){
    char wildCardPath[256];
    PDH_STATUS pdhStatus;
    // Use the counter path format without specifying the computer.
    // \object(parent/instance#index)\counter
    strcpy(wildCardPath,"\\Processor(*/*#*)\\%% Processor Time");
    pdhStatus=getAllMetricsFor(wildCardPath);
    strcpy(wildCardPath,"\\PhysicalDisk(*/*#*)\\%% Disk Time");
    pdhStatus=getAllMetricsFor(wildCardPath);
    strcpy(wildCardPath,"\\PhysicalDisk(*/*#*)\\%% Disk Read Time");
    pdhStatus=getAllMetricsFor(wildCardPath);
    strcpy(wildCardPath,"\\PhysicalDisk(*/*#*)\\%% Disk Write Time");
    pdhStatus=getAllMetricsFor(wildCardPath);
    strcpy(wildCardPath,"\\Memory(*/*#*)\\Available MBytes");
    pdhStatus=getAllMetricsFor(wildCardPath);
    strcpy(wildCardPath,"\\Network Interface(*/*#*)\\Bytes Received/sec");
    pdhStatus=getAllMetricsFor(wildCardPath);
    strcpy(wildCardPath,"\\Network Interface(*/*#*)\\Bytes Sent/sec");
    pdhStatus=getAllMetricsFor(wildCardPath);
    return 0;
}
/*
This member actually takes a counter path with wildcards and uses the PdhExpandWildCardPath to get all the matching counter paths. It then adds these expanded paths to the query we created in the constructor.
*/
PDH_STATUS PerfLogger::getAllMetricsFor(char *WildCardPath){
    LPSTR szCtrPath = NULL;
    char szWildCardPath[256] = "\000";
    DWORD dwCtrPathSize = 0;
    HCOUNTER phcounter;
    PDH_STATUS pdhStatus;
    sprintf(szWildCardPath, WildCardPath); //works
    // First try with an initial buffer size.
    szCtrPath = (LPSTR) GlobalAlloc(GPTR, INITIALPATHSIZE);
    dwCtrPathSize = INITIALPATHSIZE;
    pdhStatus = PdhExpandWildCardPath(NULL,szWildCardPath, szCtrPath, &dwCtrPathSize,NULL);
    // Check for a too small buffer.
    if (pdhStatus == PDH_MORE_DATA) {
        dwCtrPathSize++;
        GlobalFree(szCtrPath);
        szCtrPath = (LPSTR) GlobalAlloc(GPTR, dwCtrPathSize);
        pdhStatus = PdhExpandWildCardPath(NULL,szWildCardPath, szCtrPath, &dwCtrPathSize,NULL);
    }
    // Add the paths to the query
    if (pdhStatus == PDH_CSTATUS_VALID_DATA) {
        LPTSTR ptr;
        ptr = szCtrPath;
        while (*ptr) {
            pdhStatus = PdhAddCounter(hQuery,ptr,0,&phcounter);
            ptr += strlen(ptr);
            ptr++;
        }
    }
    else
        printf("PdhExpandCounterPath failed: %d\n", pdhStatus);
    return pdhStatus;
}
/*
Since eventually the PerfLogger class will be used as a service, I just start logging in an infinite loop - I'll count on the Windows Service API to allow me to stop logging by updating the value of the boolean.
*/
void PerfLogger::startPerfLog()
{
    logging = TRUE;
    PDH_STATUS pdhStatus;
    // Open the log file for write access.
    pdhStatus = PdhOpenLog (logFile, PDH_LOG_WRITE_ACCESS | PDH_LOG_CREATE_ALWAYS, &logType, hQuery, 0, NULL, &phLog);
    // Capture samples and write them to the log.
    while(logging)
    {
        pdhStatus = PdhUpdateLog (phLog, TEXT("Some Text."));
        Sleep(intervalBetweenMeasurements); // Sleep between samples
    }
    // Close the log and the Query
    pdhStatus = PdhCloseLog (phLog, PDH_FLAGS_CLOSE_QUERY);
}
// Just to stop the logging loop
void PerfLogger::stopPerfLog(){
    logging = FALSE;
}

DDJ
Optimizing Pixomatic for x86 Processors: Part I Challenging assumptions about optimization Michael Abrash
Michael is a developer at RAD Game Tools and author of several legendary programming books, including Graphics Programming Black Book. He can be contacted at [email protected].

For the first time in a while, I recently happened to leaf through my Graphics Programming Black Book (http://www.ddj.com/downloads/premium/abrash/), where one of the first things I noticed was the phrase "Assume Nothing." And that made me think that maybe I should follow my own programming advice more often…

You see, a couple of months ago, Jeff Roberts (my boss at RAD Game Tools) asked me to take a look at the low-level MP3 filter code in the Miles Sound System, an SDK that handles all aspects of sound — 3D audio, digital audio, digital decompression, interactive MIDI, and the like. This code was performance critical, so much so that it was written entirely in assembly language. Nonetheless, Jeff wanted me to see if I could find a way to speed it up further. In particular, he thought perhaps the code would benefit from Streaming SIMD Extension (SSE), the instruction set Intel introduced with the Pentium III. It had been a while since I had had a crack at pedal-to-the-metal x86 optimization (I've been working on PlayStation 2) and I happily dove into the code. It is complex and intricate, and not obviously amenable to the four-way parallelism of SSE, but eventually I figured out a way to bring SSE to bear. With a day of coding, I managed to speed up the entire MP3 test application by about 10 percent.

As it turned out, that 10 percent speedup was more impressive than it seems. When I finally got around to profiling, the low-level filter code turned out to have taken only about one-third of the total time before my optimizations, meaning that I had succeeded in speeding the low-level filter up by about 1.5 times. Not bad. However, profiling also revealed that an unrelated C routine was taking 40 percent of the time, considerably more than the filter code. Obviously, my assumption that the code in assembly was the most performance critical had caused me to start optimizing in the wrong place. Worse yet, most of the time in the C routine was taken by a tiny block of code, which did nothing more than call the library pow( ) function with integer exponents between 0 and 15. Worst of all, though, it turned out that the exponent was one most of the time. It didn't take a whole lot of research or deep thinking to figure out a faster way to raise a number to the first power than calling pow( )! I converted this bit of code over to using a switch( ) statement to handle the various cases directly with multiplication (and to handle an exponent of one by doing nothing), all of which took about five minutes. In return, I got an overall speedup in the MP3 test of about 1.4× — four times what I had gotten from my heroic assembly language optimizations.

"Assume Nothing" is probably the oldest, simplest, and least-specific optimization principle I've ever come up with. Every so often, however, I am reminded yet again that I ignore it at my peril. So as I examine a variety of optimization techniques in this article, remember that any technique is only as good as the assumptions — or, preferably, knowledge — that you base it on.

Pixomatic
In this three-part article, I discuss the process of optimizing Pixomatic, an x86 3D software rasterizer for Windows and Linux written by Mike Sartain and myself for RAD Game Tools (http://www.radgametools.com/). Pixomatic was perhaps the greatest performance challenge I've ever encountered, certainly right up there with Quake. When we started on Pixomatic, we weren't even sure we'd be able to get DirectX 6 (DX6) features and performance, the minimum for a viable rasterizer. (DirectX is a set of low-level Windows multimedia APIs that provide access to graphics and audio cards.) I'm pleased to report that we succeeded. On a 3-GHz Pentium 4, Pixomatic can run Unreal Tournament 2004 at 640×480, with bilinear filtering enabled. On slower processors, performance is of course lower, but by rendering at 320×240 and stretching up to 640×480, then drawing the heads-up display (HUD) at full resolution, Unreal Tournament 2004 runs adequately well, even on a 733-MHz Pentium III. In the end, we exceeded our design goals. With Version 2.0, Pixomatic has a high-end DX7-class feature set (except it doesn't support cubemaps) and low-end DX7-class performance, with peak 3-GHz Pentium 4 performance of more than 100 megapixels and nearly 5 million triangles a second. In this three-part article, I describe how we've managed to push Pixomatic as far as we have.
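For readers who want to see the shape of that pow( ) fix from the opening anecdote, here is a minimal sketch — my own illustration, not the actual Miles Sound System code, and the function name raised_to is made up:

/* Raise x to a small integer power (0..15) without calling pow(). */
static double raised_to(double x, int exponent)
{
    switch (exponent) {
    case 0:  return 1.0;
    case 1:  return x;               /* the common case: nothing to do */
    case 2:  return x * x;
    case 3:  return x * x * x;
    case 4:  { double x2 = x * x; return x2 * x2; }
    default: {                       /* remaining small exponents: repeated multiplies */
        double result = x;
        for (int i = 1; i < exponent; i++)
            result *= x;
        return result;
    }
    }
}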
While I won’t be talking about Pixomatic as a product, there is one product issue that I’d like to address. People keep asking: “Why would anyone want to use a software rasterizer nowadays?” (Actually, I believe the exact words they often use are: “Are you nuts?” But I’ll just go with the first version of the question.) It’s a good question, with a simple answer: Because Pixomatic is utterly reliable. There are no dependencies on APIs, drivers, or chips, so you can be absolutely certain that Pixomatic will work on any Windows or Linux machine with Intel’s Multimedia Extensions (MMX). The potential market is bigger, tech support is simpler, and returns are reduced. Ad Astra Per Aspera A good place to start is the story of how Pixomatic came to be a DX7-class rasterizer, and a good place to start with that is to digress a bit to discuss Norton Juster’s wonderful children’s book The Phantom Tollbooth. In it, Milo travels through the Kingdom of Wisdom, having many adventures and surviving many dangers to rescue the princesses Rhyme and Reason. As he does so, people keep telling him, “There’s just one thing about your quest,” but they won’t tell him what that thing is. Finally, after he has succeeded, a parade is held in his honor, and while riding in it, he asks the Kings exactly what that thing no one would discuss with him was. They inform him of the truth: His quest was impossible. The Kings add: “But if we’d told you then, you might not have gone — and, as you’ve discovered, so many things are possible just as long as you don’t know they’re impossible.” If only we’d had the Kings to guide us when we started to design Pixomatic! We weren’t sure whether we’d be able to get adequate performance, and after back-ofthe-envelope calculations, we figured we’d have to cut features to the bone to keep performance up. In addition, once we’d determined that we’d have to compile code on the fly, we were convinced that the overhead and complexity of supporting a lot of features would be too much. Consequently, we aimed for a DX6-class pipeline, with little more than two textures and Gouraud shading, and with modulation as the only operation. That was Pixomatic 1.0, and if its features were a little limited and didn’t map ideally to most current games, its performance was certainly good enough. Then customers started asking for DX7class features and we started patching new capabilities into Pixomatic. Naturally, because there wasn’t any overall design to these additions, they started to get messy and complicated. And then, http://www.ddj.com
one day, I realized that the problem was that I had assumed that a DX7-class feature set was impossible, and I hadn’t even taken a shot at it to find out if that was really true. It turned out that it was actually easier to refactor the code to a full, orthogonal DX7-class feature set than to patch in random features. Moreover, the performance was just as good as in Pixomatic 1.0. Everything worked great for Pixomatic 2.0; the only way it could have been better would have been if we had designed for DX7-class features from the start. (Clearly, for Pixomatic 3.0, all I have to tell myself is that a DX9-class software rasterizer isn’t impossible!) In short, everything worked out fine, and Pixomatic wound up with a nice, big, clean feature set.
In 1995-97, I wrote a series of Dr. Dobb's Sourcebook articles describing all the revelations and false starts we went through in designing and implementing Quake at Id Software. I'd love to do the same for Pixomatic, but the truth is it wasn't that kind of project. This was at least the fourth 3D rasterizer I'd written, so there wasn't a whole lot of aha! in the process; it was more a matter of matching our knowledge of the 3D pipeline to Pentium III and Pentium 4 hardware as efficiently as possible. Of course, that's not to say it was easy, or that there weren't any mistakes or learning experiences. There were plenty of both, as you'll see, which isn't surprising in a product with well over 10^30 valid pixel-processing configurations.
Still, it was a relatively linear development process, at least compared to Quake. By the way, a number like 10^30 may make you wonder how we managed to test Pixomatic. We constructed a testbed that implemented the pixel pipeline in C, along with a very-long-period random-number generator and the ability to run random configurations through both the C and Pixomatic pipelines and compare the results. Then we started it up and left it running for days at a time, churning through tens of billions of configurations a day. But obviously, it's impossible to test every single possible configuration.
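The testbed's inner loop amounted to a randomized differential test. The sketch below is purely illustrative (the type and function names are hypothetical, not the actual Pixomatic testbed API, and declarations and setup are omitted), but it shows the shape of such a loop:

/* Illustrative sketch only: run one random configuration through the
   reference C pipeline and through the optimized pipeline, then compare
   the resulting pixels; repeat indefinitely. */
for (;;) {
    PixelPipeConfig cfg;
    RandomConfig(&rng, &cfg);                    /* very-long-period PRNG        */
    RunReferencePipeline(&cfg, refPixels);       /* straightforward C pipeline   */
    RunOptimizedPipeline(&cfg, pixoPixels);      /* optimized Pixomatic pipeline */
    if (memcmp(refPixels, pixoPixels, FRAME_BYTES) != 0)
        ReportMismatch(&cfg);                    /* log the failing configuration */
}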
C, Assembly, and More
Pixomatic is implemented in several ways, as appropriate for the performance needs of various parts of the rasterizer. Most of the code outside the pixel pipeline is in pure C, with inline assembly used in key places. For example, backface culling is done with inline assembly, among other reasons because, in C, there is no way to prevent gradient values from getting stored to memory before the backface test is performed, even though they only need to be stored if the backface test passes. The span generator emits perspective-correct horizontal spans 1 to 16 pixels long to the pixel pipeline and exists in three versions. The first is an all-C x87 version. The second is a part C, part inline assembly version that uses MMX and 3DNow (a set of MMX-like multimedia extensions that AMD has added to its processors).
The third and fastest is an all-inline assembly version that uses SSE and MMX. The use of inline assembly improved performance by 20 to 45 percent in various scenarios — and that's overall performance, including time spent in the pixel pipeline, which I will discuss shortly. In addition to C and inline assembly, Pixomatic contains a fair amount of code that's neither C nor inline assembly. The reason for this is the tremendous number of configurations Pixomatic has to support. For example, Pixomatic has to handle a huge number of possible pixel pipelines, as mentioned earlier; consider the number of available stage operations, sources, and scalings for both alpha and RGB, and then multiply by three stages. Or consider the number of possible vertex formats. Moreover, it's not enough to process all these configurations correctly; they also have to be handled efficiently, which rules out using lots of tests and branches. When we started to design Pixomatic, it wasn't obvious how best to get both complete feature coverage and high performance. One approach we used was to write a custom preprocessor to simply expand out all the permutations; for example, this is how the various 2D blts Pixomatic supports are implemented. Our overall experience with this approach was mixed; the preprocessor made it easy to generate permutations, but that code was then hard to debug because we had to work with the preprocessor output in the debugger. Worse, though, was that the preprocessor made it easy to generate a huge amount of code. Since the pixel pipeline involves a vast number of possible configurations, it was clear early on that the preprocessor wasn't going to do the trick there. When we tried using the preprocessor to implement the relatively limited pipeline of Pixomatic 1.0, it generated several megabytes of code (that's executable code — the source was even larger), way over our design target. (For comparison, the entire Pixomatic 2.0 DLL is about 250 KB in size.) A technique I had used back in the 486 and Pentium days was to thread together span processors that each performed one pixel-processing operation; for example, one span processor to load the texels for a span into a temporary buffer, another to Gouraud shade them in the buffer, yet another to handle specular shading, and so on. However, my experience was that the loop overhead for each pass was fairly expensive even then; given how costly mispredicted branches are now and that at least one branch per pass is likely to mispredict, this did not seem like the right direction.
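For illustration only, the chained span-processor approach might be sketched as follows; the names and types here are hypothetical, not the Pixomatic source, and the point is that every operation re-walks the span, which is where the per-pass loop and branch overhead comes from:

/* Hypothetical sketch of the multi-pass "span processor" approach described
   above; every operation is a separate pass over the span. */
typedef void (*SpanProc)(SpanBuffer *span);

static const SpanProc pipeline[] = {
    LoadTexels,       /* fetch the texels for the span into a temporary buffer */
    GouraudShade,     /* shade them in the buffer                              */
    ApplySpecular,    /* apply specular shading                                */
    WriteSpan         /* write the finished pixels out                         */
};

static void ProcessSpan(SpanBuffer *span)
{
    size_t i;
    for (i = 0; i < sizeof pipeline / sizeof pipeline[0]; i++)
        pipeline[i](span);   /* one complete pass over the span per operation */
}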
The obvious next thought was to compile an optimized pixel pipeline on the fly, one that contained no extraneous instructions or overhead at all. There were two concerns with this. The first concern was whether the code would run slowly as a result of the CPU synchronizing its caches for the modifications. After all, code compiled on the fly is close to self-modifying code, which Intel specifically warns against as a performance hazard. However, tests showed that the time between the drawing primitive call (when compilation happens) and the pixel pipeline (when the modified code starts to execute) is more than long enough for the new code to propagate to the code cache, what with transformation, projection, clipping, culling, gradient calculation, and span construction all happening in that interval. The second concern was whether the overhead of compiling the code would be so large as to offset the benefits. Certainly, it would take some time to do the compiling, and that cost would be multiplied by the number of times rendering state changes forced a recompilation of the pixel pipeline. Initially, we assumed we would have to implement some kind of cache for compiled pixel pipelines, and we even exposed APIs to enable applications to do so if they wanted, but all this proved to be unnecessary for three reasons.
• State changes are expensive with 3D hardware as well as with Pixomatic, so 3D apps are already written to minimize state changes.
• We designed Pixomatic to batch up state changes until a triangle actually needed to be drawn, so we recompiled only when it was actually needed, rather than whenever a state changed.
• We designed Pixomatic's pixel pipeline so that the selection of a new texture doesn't require recompilation.
This last point is critical because while textures change frequently (often every few triangles with mipmaps), the remainder of the rendering state tends to remain in effect for dozens or even hundreds of triangles. Of course, none of this would have mattered unless the code could be compiled very quickly, so full-blown traditional compilers were out of the question. Instead, we wrote a streamlined compiler custom designed for the task, which we call the "welder."
Next Month
In the next installment of this article, I'll delve into welder and, among other topics, introduce an optimization tool dubbed "Speedboy."
DDJ
HTTP Response Splitting
Dealing with a new, powerful intruder attack
Amit Klein and Steve Orrin
In the process of developing AppScan (a security testing tool from Sanctum, the company we work for), we discovered a new attack technique called "HTTP response splitting." This powerful and elegant technique has an impact on various web environments. HTTP response splitting enables various attacks, such as web-cache poisoning, cross-user defacement, page hijacking of user information, and cross-site scripting (XSS). HTTP response splitting — and the attacks derived from it — is relevant to most web environments and is the result of an application's failure to reject illegal user input; in this case, input containing malicious or unexpected characters — the CR and LF characters. An HTTP response-splitting attack always involves at least three parties:
Steve is CTO of Sanctum and participates in several working groups of the Internet Engineering Task Force (IETF) and Web Application Security Consortium (WASC). He can be contacted at [email protected]. Amit is Director of Security and Research for Sanctum. He can be contacted at [email protected].
• The web server, which has a security hole enabling HTTP response splitting.
• The target, which interacts with the web server perhaps on behalf of attackers. Typically, this is a cache server (forward/reverse proxy) or browser (possibly with a browser cache).
• The attacker, who initiates the attack.
At the heart of HTTP response splitting is the attacker's ability to send a single HTTP request that forces the web server to form an output stream, which is then interpreted by the target as two HTTP responses instead of the normal single response. The first response may be partially controlled by attackers, but this is less important. What is material is that attackers completely control the form of the second response, from the HTTP status line to the last byte of the HTTP response body. Once this is possible, attackers realize the attack by sending two requests through the target. The first one invokes two responses from the web server, and the second request typically is to some "innocent" resource on the web server. However, the second request is matched by the target to the second HTTP response, which is fully controlled by attackers. Attackers, therefore, trick the target into believing that a particular resource on the web server (designated by the second request) is the server's HTTP response (server content) while, in fact, it is some data that is forged by attackers through the web server — this is the second response.
HTTP response-splitting attacks occur where the server script embeds user data in HTTP response headers. This typically happens when the script embeds user data in the redirection URL of a redirection response (HTTP status code 3xx), or when the script embeds user data in a cookie value or name when the response sets a cookie. In the first case, the redirection URL is part of the Location HTTP response header; in the second (cookie-setting) case, the cookie name/value is part of the Set-Cookie HTTP response header. For instance, consider the JSP page (located in /redir_lang.jsp) in Example 1(a). When invoking /redir_lang.jsp with a parameter lang=English, it redirects to /by_lang.jsp?lang=English. Example 1(b) is a typical response (the web server is BEA WebLogic 8.1 SP1). As you can see, the lang parameter is embedded in the Location response header. To mount an HTTP response-splitting attack, instead of sending the value English, we send a value that makes use of URL-encoded CRLF sequences to terminate the current response and shape an additional one. Example 2(a) illustrates how this is done. This results in the output stream in Example 2(b), sent by the web server over the TCP connection. This TCP stream is parsed by the target as follows:
1. A first HTTP response, which is a 302 (redirection) response.
2. A second HTTP response, which is a 200 response with content comprising 19 bytes of HTML.
3. Superfluous data. Everything beyond the end of the second response is superfluous and does not conform to the HTTP standard.
So when attackers feed the target with two requests, the first being to the URL in Example 3(a) and the second to the URL in Example 3(b), the target would believe that the first request is matched to the first response; see Example 3(c). By this, the attacker manages to fool the target. Admittedly, this example is naive. For instance, it doesn't take into account problems with how targets parse the TCP stream, issues with the superfluous data, problems with the data injection, and how to force caching.
Security Impact
With HTTP response splitting, it is possible to mount various kinds of attacks:
• Cross-site scripting (XSS). Until now, it has been impossible to mount XSS attacks on sites through a redirection script when the clients use IE unless the Location header can be fully controlled. With HTTP response splitting, it is possible to mount XSS attacks even if the Location header is only partially controlled by the attacker.
• Web-cache poisoning (defacement). This is a new attack. Attackers force the target (a cache server of some sort, for instance) to cache the second response in response to the second request. An example is to send a second request to "http://web.site/index.html" and force the target (cache server) to cache the second response, which is fully controlled by attackers. This is effectively a defacement of the web site, at least as experienced by other clients who use the same cache server. Of course, in addition to defacement, attackers can steal session cookies, or "fix" them to a predetermined value.
• Cross-user attacks (single user, single page, temporary defacement). As a variant of the attack, attackers don't send the second request. This seems odd at first, but the idea is that, in some cases, the target may share the same TCP connection with the server among several users (this is the case with some cache servers). The next user to send a request to the web server through the target is served by the target with the second response, which the attackers generated. The net result is having a client of the web site being served with a resource crafted by attackers. This enables attackers to deface the site for a single page requested by a single user (a local, temporary defacement).
In addition to defacement, attackers can steal session cookies and/or set them.
• Hijacking pages with user-specific information. With this attack, attackers can receive the server's response to a user's request instead of the user. Therefore, attackers gain access to user-specific information that may be sensitive and confidential.
• Browser-cache poisoning. This is a special case of web-cache poisoning. It is similar to XSS in that the attacker needs to target individual clients.
However, unlike XSS, it has a long-lasting effect because the spoofed resource remains in the browser's cache.
A Step Toward the "Perfect Hack"
One of the unique properties of HTTP response-splitting's web-cache poisoning attack is its forensics- and incident-response-hampering qualities. Typically, when a site is defaced or hacked, site administrators are notified by unhappy users.
(a)
<% response.sendRedirect("/by_lang.jsp?lang=" + request.getParameter("lang")); %>

(b)
HTTP/1.1 302 Moved Temporarily
Date: Wed, 24 Dec 2003 12:53:28 GMT
Location: http://10.1.1.1/by_lang.jsp?lang=English
Server: WebLogic XMLX Module 8.1 SP1 Fri Jun 20 23:06:40 PDT 2003 271009 with
Content-Type: text/html
Set-Cookie: JSESSIONID=1pMRZOiOQzZiE6Y6iivsREg82pq9Bo1ape7h4YoHZ62RXjApqwBE!-1251019693; path=/
Connection: Close

302 Moved Temporarily
This document you requested has moved temporarily.
It's now at http://10.1.1.1/by_lang.jsp?lang=English.
Example 1: (a) Typical JSP page; (b) typical response.

(a)
/redir_lang.jsp?lang=foobar%0d%0aContent-Length:%200%0d%0a%0d%0aHTTP/1.1%20200%20OK%0d%0aContent-Type:%20text/html%0d%0aContent-Length:%2019%0d%0a%0d%0a<html>Shazam</html>

(b)
HTTP/1.1 302 Moved Temporarily
Date: Wed, 24 Dec 2003 15:26:41 GMT
Location: http://10.1.1.1/by_lang.jsp?lang=foobar
Content-Length: 0

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 19

<html>Shazam</html>
Server: WebLogic XMLX Module 8.1 SP1 Fri Jun 20 23:06:40 PDT 2003 271009 with
Content-Type: text/html
Set-Cookie: JSESSIONID=1pwxbgHwzeaIIFyaksxqsq92Z0VULcQUcAanfK7In7IyrCST9UsS!-1251019693; path=/
Connection: Close

302 Moved Temporarily
This document you requested has moved temporarily.
[[[ truncated for brevity ]]]

Example 2: (a) URL-encoded CRLF sequences that terminate the current response and shape an additional one; (b) resulting output stream.
In classic examples of web-site defacement, where a file on the web server itself has been altered or replaced, a quick look at the site by the administrator is all that is necessary to validate that the attack has occurred. With web-cache poisoning, the altered or replaced page will not show up on the web server and may be invisible to internal employees accessing the site directly, as opposed to accessing it from the external Internet (where the cache server typically provides its service). Further, in the instance of an intermediate web-cache server attack, such as an online service or ISP's cache servers, administrators will be unable to verify the attack unless they are accessing the site via the ISP or online service. Web-cache poisoning can also result in insufficient logs. Due to the transient nature of the data held in cache servers, the pages cached and the time of cache refresh or update are rarely logged, and even if the time of update is logged, the page content is not. This also impedes incident response and forensics. Finally, HTTP response splitting is reversible by attackers, who have the ability to undo the web-cache poisoning attack by submitting the injection with a payload constructed to reverse the effects. This is important on two key levels:
• Standard forensics and incident-response procedures hinge on capturing attack evidence; in this case, the modified HTML page. By removing the malicious page prior to the site administrator or law enforcement obtaining a copy, attackers can cover their tracks and hamper (if not disable) forensics procedures. Thus, attackers can implement the attack and quickly reverse it, leaving site administrators and law enforcement with nothing to establish that an attack even occurred.
• Some web-cache servers are scheduled to automatically refresh cached pages from the web site at regular intervals. With past web-site defacement attacks, covering one's tracks typically required hacking back into the site and deleting the manipulated files. Only very skilled intruders could change or manipulate the web-server logs to mask the break-ins. An HTTP response-splitting web-cache poisoning attack lets even novice crackers hack with a level of impunity once reserved for the very skilled.
Recommendations
It is critical that web-application developers validate input. You need to remove CRs and LFs (and all other hazardous characters) before embedding data into any HTTP response headers, particularly when setting cookies and redirecting.
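For the removal approach, a short sketch (illustrative only, not one of the article's listings; the null check on the parameter is omitted) might be:

// Strip CR and LF from the user-supplied value before it reaches a header.
String lang = request.getParameter("lang").replaceAll("[\\r\\n]", "");
response.sendRedirect("/by_lang.jsp?lang=" + lang);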
(a)
/redir_lang.jsp?lang=foobar%0d%0aContent-Length:%200%0d%0a%0d%0aHTTP/1.1%20200%20OK%0d%0aContent-Type:%20text/html%0d%0aContent-Length:%2019%0d%0a%0d%0a<html>Shazam</html>

(b)
/index.html

(c)
The first request [Example 3(a)] is matched by the target to the first response:

HTTP/1.1 302 Moved Temporarily
Date: Wed, 24 Dec 2003 15:26:41 GMT
Location: http://10.1.1.1/by_lang.jsp?lang=foobar
Content-Length: 0

And the second request (to /index.html) is matched to the second response:

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 19

<html>Shazam</html>
Example 3: Fooling the targets by feeding them multiple requests.

String Lang = request.getParameter("lang");
...
if ((Lang.indexOf('\r') == -1) && (Lang.indexOf('\n') == -1)) {
    /* Lang does not contain CRs or LFs, so it's safe here */
    /* (at least from the perspective of                   */
    /*  HTTP Response Splitting)                           */
    response.sendRedirect("/by_lang.jsp?lang=" + Lang);
}
else {
    /* handle security error (reject the request) */
    ...
}
Example 4: Eliminating CR/LFs from parameters.
Another option is to reject requests that contain these characters in data that is embedded into HTTP headers, as such requests are attack attempts. It is possible to use third-party products (like Sanctum's AppShield, for example) to prevent CR/LF injection. Consider the vulnerable J2EE script from Example 1(a), which was shown to be exploitable at least on BEA WebLogic 8.1 SP1 and IBM WebSphere 5.1 servers. It can be easily fixed by making sure that no CR/LF is present in the parameter; see Example 4. This logic ensures that the lang parameter does not contain any CR or LF by searching for each and ascertaining that neither is found in the parameter value. In ASP.NET, making sure that controls do not contain specific characters amounts to using (correctly) the regular expression field validator. For example:
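(The declaration below is a representative sketch; the control IDs and the exact expression are illustrative rather than taken from the original listing.)

<asp:TextBox id="lang" runat="server" />
<asp:RegularExpressionValidator id="langValidator" runat="server"
    ControlToValidate="lang"
    ValidationExpression="^[^\r\n]*$"
    ErrorMessage="The lang field must not contain CR or LF characters." />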
This ASP.NET field validator is bound to the user-input control lang, and validates this input field by ensuring that it does not contain the CR/LF characters. This is done by matching the input field against a regular expression that does not allow any CR or LF in its pattern. Of course, it is also necessary to test explicitly for the state of the validator (by checking the validator's IsValid property or by checking the global page state Page.IsValid). Furthermore, you should make sure to use the most up-to-date application engine and ensure that your application is accessed through a unique IP address (that is, that the same IP address is not used for another application, as it is with virtual hosting). Lastly, we advise that you scan the application (before deployment, using a tool such as Sanctum's AppScan, http://www.sanctuminc.com/) to ensure that the application is not vulnerable to HTTP response-splitting (and other) attacks.
Conclusion
HTTP response splitting is a new technique that enables several new and interesting attacks. This technique only applies to applications that do not validate their input before embedding it into HTTP response headers. Avoiding this vulnerability is a simple matter of adding a short if test (or, in ASP.NET, using a field validator control). Application programmers are encouraged to understand the new threat, to apply the simple security measures in their applications, and, in general, to implement security (via input validation) in their code.
DDJ
Aspect-Oriented Programming & C++
A powerful approach comes to C++
Christopher Diggins
Aspect-oriented programming (AOP) is a technique for separating and isolating crosscutting concerns into modular components called "aspects." A crosscutting concern is a behavior that cuts across the boundaries of assigned responsibility for a given modular element. Examples of crosscutting concerns include process synchronization, location control, execution timing constraints, persistence, and failure recovery. There is also a wide range of algorithms and design patterns that are more naturally expressible using AOP. For the most part, discussions of AOP have focused on Java; see, for example, "Lightweight Aspect-Oriented Programming," by Michael Yuan and Norman Richards (DDJ, August 2003) and "Aspect-Oriented Programming & AspectJ," by William Grosso (DDJ, August 2002). The main reason for this is that the majority of the advancement of AOP has been done through AspectJ, a Java language extension and set of Java-based tools designed for AOP (http://www.eclipse.org/aspectj/). However, C++ programmers can also benefit from AOP because it lets you better express designs with crosscutting concerns than is possible with other, better-known techniques, such as object-oriented design or generic programming. Until recently, the application of AOP technology has been constrained to
Christopher is a freelance computer programmer and developer of Heron, a modern, general-purpose, open-source language inspired by C++, Pascal, and Java. He can be contacted at http://www.heron-language.com/.
type-modifying systems. Type-modifying AOP systems modify an existing type — in most languages, this is only achievable through the use of a language preprocessor. Furthermore, most of the initial application of AOP technology has been done within the AspectJ project. This is changing with the advent of AOP preprocessors, such as AspectC++, an AOP language extension for C++ (http://www.aspectc.org/). In this article, I examine the concepts behind AOP and present techniques that demonstrate how you can implement AOP via macros. For example, Listing One is pseudocode that illustrates crosscutting concerns by introducing them into a hypothetical class, FuBar, one by one. FuBar is straightforward — it does Fu and it does Bar. But say you want to debug at some point by dumping the state of FuBar before and after each function call; see Listing Two. Once the error is found, you leave the code in for another day, with the DumpState calls conditionally compiled based on a debug switch. Now say that your project manager informs you that there is a new user-defined setting that decides whether the function Bar should actually do Bar, or whether it should do nothing. So you modify the code as in Listing Three. Now imagine that FuBar somehow finds its way into a real-time simulation. In this scenario, Fu and Bar turn out to be bottlenecks, so you simply increase the executing thread priority as in Listing Four. One day, a tester finds a problem with your code. It turns out that the internal state goes invalid at some point, but there are thousands of calls to Fu and Bar in the software and, as such, state dumping is not appropriate. You could then identify and fix the bug by applying a class invariant check for each of the calls; see Listing Five. Then one day, the application goes multithreaded and you need to protect Fu and Bar from mutual entry. Consequently, you add a lock/unlock mechanism to maintain thread safety in FuBar (see Listing Six).
Despite its naïveté, this example demonstrates interleaved concerns and should be quite recognizable. The main concern is the class FuBar, whose responsibility is to do Fu and Bar (whatever they may be), while the interleaved concerns are those of debugging, real-time constraints, thread synchronicity, and the algorithmic concern of a user-defined flag to prevent you from doing "bar." The obvious problem is that, while FuBar may work fine, it is now much more complex than before. As a result, it is too specialized to be reusable, and is becoming increasingly difficult to maintain and update. In an ideal world, we would be able to express the various concerns in separate modules or classes. This is where aspect-oriented programming comes in. The goal of AOP is to leave the original FuBar as untouched as possible, and to separate the concerns into new modules (or, in our case, classes) that themselves know as little about FuBar as possible. These separated concerns are aspects, which work by providing advice on what to do at specific points of intersection in the target class. The points of intersection are called "joinpoints," and a set of joinpoints is called a "pointcut." The advice of an aspect can take the form of entirely new behavior, or a simple observation of execution. What C++ lacks is an operation that lets you define a new class from a target class by declaring which aspects operate on it, and at which pointcuts. One way to define new operations is through a macro, so consider this as-yet-undefined macro:

CROSSCUT(TARGET, POINT_CUT, ASPECT)
This macro defines a new type by introducing a reflective wrapper around the TARGET class for the set of functions defined in the POINT_CUT class. This wrapper would then implement the advice found in the ASPECT.
You can imagine using this imaginary CROSSCUT macro to create a new FuBar with built-in state-dumping functionality:

typedef CROSSCUT(FuBar, FuBarPointCut, StateDumpingAspect) DumpStateFuBar;
This statement can be read as: Define a new type named "DumpStateFuBar" by taking an existing type, FuBar, and applying the aspect StateDumpingAspect at the functions listed in FuBarPointCut. What's now missing is the definition of the pointcut and the aspect. A pointcut is a list of functions so, conceptually, the FuBarPointCut could be thought of as:

pointcut FuBarPointCut { Fu; Bar; }
meaning that FuBarPointCut represents the set of functions named Fu or Bar. The aspect introduces crosscutting behavior by providing implementations of advice functions that are invoked at specific locations around the invocation of the joinpoints. Here are the advice functions that I will use from here on:
• void OnBefore( ). Called before the joinpoint function would be called.
• void OnAfter( ). The last function called if the execution of the joinpoint does not throw an exception.
• bool OnProceedQuery( ). If this function evaluates to False, then the entire joinpoint function is not called.
• void OnFinally( ). Called after execution of the joinpoint, regardless of whether the joinpoint function was actually called.
• void OnExcept( ). Called just after OnFinally if an error occurs. For users to handle the exception, they must rethrow it and catch it from within their implementation of OnExcept( ).
A logical implementation of an aspect uses a base class with virtual functions. You could then define the base class in Listing Seven for the next part of this example. So, for the DumpStateFuBar example, the state-dumping aspect would look like Listing Eight. What I have done is declare the crosscutting concern in a separate class called StateDumpingAspect and used the crosscut macro to declare a new type, which intersects the concern with the original class at a specified set of locations (the pointcut). This is the concept of isolation and separation of concerns that forms the basis of AOP. All of this is fine and dandy, but what gets really interesting is to combine the aspects. For that, you would define a new type such as the one in Listing Nine. You may notice that in Listing Ten, which presents pseudocode implementations of the various new aspects, there is a new pointcut named "BarPointCut," which represents the set of functions named "Bar." This is because I want to apply the BarPreventingAspect to the function Bar and not Fu. This is demonstrative of the power of pointcuts, in that an aspect can be written generally and applied specifically through the use of a pointcut. What I have done here is use the crosscut macro to pile aspects up on top of FuBar hierarchically, isolating and separating a huge amount of complexity into reusable aspects. This is a much more flexible and powerful design than the more naive approach we exemplified earlier with the degenerate FuBar.
Heron
The method of using AOP to declare new types and the crosscut macro came about as a proof of concept for the inclusion of AOP techniques in the Heron programming language. Heron is a multiparadigm, general-purpose language influenced heavily by Pascal and C++, which is designed to make it easier and faster to develop correct, scalable, and efficient software. The Heron specification is available online at http://www.heron-language.com/. I am currently building a Heron-to-C++ translator written in Heron.
— C.D.
Implementing Crosscut Macros
I now turn to an example that uses a simple implementation in C++ of a crosscut macro along with macros for declaring joinpoints. The files aspect.hpp and aspect.cpp (available electronically; see "Resource Center," page 5) look similar to the previous pseudocode example. However, here I take a single class Point, and generate from it a new class Point_T, which introduces two crosscutting concerns — a LoggingAspect and a PreventiveAspect. The aspects in this example are user-defined classes that inherit from BaseAspect and provide their own implementations for the OnXXX functions. The main difference between the aspects in this example and the previous one is that I have now introduced a parameter for each function of type FxnDesc, a function descriptor containing important information about the joinpoint that gets passed to the aspect when a joinpoint is invoked. FxnDesc contains the function name, the class name, parameter names, and names of the parameter types as strings. The pointcuts here are defined using the macros DEF_POINTCUT(CLASS_NAME) and END_POINTCUT. It is useful to know that the pointcut macros generate a new parameterized class with the name CLASS_NAME, which inherits from two bases that are passed as parameters. This is in fact what the CROSSCUT macro uses to define a new class.
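Before looking at the joinpoint macros in detail, a much-simplified, hand-written sketch may help show the shape of the wrapper such a macro expands to. This is illustrative only, not the aspect.hpp implementation; it omits FxnDesc and exception handling, writes out a single joinpoint (Fu) by hand, and assumes the target's joinpoints and the aspect's advice functions are publicly accessible:

// Simplified illustration of a crosscut wrapper: inherit from both the
// target and the aspect, override each joinpoint, and run the advice
// around the original implementation.
template <class Target, class Aspect>
struct Crosscut : Target, Aspect {
    void Fu() {
        if (!Aspect::OnProceedQuery())
            return;                  // the advice vetoed the call
        Aspect::OnBefore();
        Target::Fu();                // the original implementation
        Aspect::OnAfter();
        Aspect::OnFinally();
    }
};
// Usage, e.g.: Crosscut<FuBar, StateDumpingAspect> fubar; fubar.Fu();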
Within the pointcut definition is a list of joinpoint definitions. I have several different macros that depend on the specific kind of function signature where the joinpoint occurs. For instance, SET_PROCJOINPOINT2 is used when the joinpoint has a void return type and accepts two parameters. Each of the joinpoint macros generates a function that calls the member functions of an aspect with a properly initialized FxnDesc struct, and then calls the joinpoint itself (assuming OnProceedQuery returns True). The crosscut macro uses the generated pointcut class as a new class that inherits from both the target and from the aspect. Each of the joinpoint functions is overridden and given a new implementation that calls the inherited advice from the aspect around the call to the original implementation of the target class.
Why Is This Crosscut Macro So Special?
Showing a few examples does not fully do the crosscutting macro justice. With practice and a little imagination, more and more opportunities to use crosscutting to separate concerns become apparent. It is important to note that crosscutting is about reducing complexity, the advantage of which only starts to become apparent in real, nontrivial applications of the technique. This is because complexity scales so poorly and is one of the major obstacles to large-scale software development.
Future Directions
There is clearly much work left to be done with the code provided here to make it more robust and efficient. For instance, there may be better implementations of the outlined AOP techniques that use templates. Another possible feature that would make the crosscutting macro much more powerful would be for it to allow observation and modification of the argument values by the advice functions. Hopefully, others will be motivated to refine the work presented here to make it more robust and fully usable in industrial-strength C++ libraries. Unfortunately, the crosscut macro can only do so much within the constraints of C++. For instance, you can't use it to simultaneously crosscut multiple classes. C++ also lacks a facility to express pointcuts in a more general manner. These are two of the most glaring omissions compared to tools like AspectC++ and AspectJ, but this does not change the fact that the crosscutting macro, even in its current stage of relative infancy, can be a useful tool in the C++ programmer's arsenal.
Acknowledgments
Thanks to Kris Unger, Muthana Kubba, Daniel Zimmerman, and Matthew Wilson for their proofreading and input on this article, and to Cristina Lopes and Walter Hursch, whose work showed me the importance of AOP.
Listing One
class FuBar {
  Fu() {
    // do fu
  }
  Bar() {
    // do bar
  }
}

Listing Two
class FuBar {
  Fu() {
    DumpState();
    // do fu
    DumpState();
  }
  Bar() {
    DumpState();
    // do bar
    DumpState();
  }
}

Listing Three
class FuBar {
  Fu() {
    DumpState();
    // do fu
    DumpState();
  }
  Bar() {
    DumpState();
    if (!prevent_bar_flag) {
      // do bar
    }
    DumpState();
  }
}

Listing Four
class FuBar {
  Fu() {
    DumpState();
    SetThreadPriorityCritical();
    // do fu
    RestoreThreadPriority();
    DumpState();
  }
  Bar() {
    DumpState();
    SetThreadPriorityCritical();
    if (!prevent_bar_flag) {
      // do bar
    }
    RestoreThreadPriority();
    DumpState();
  }
}
Listing Five
class FuBar {
  Fu() {
    TestInvariant();
    DumpState();
    SetThreadPriorityCritical();
    // do fu
    RestoreThreadPriority();
    DumpState();
    TestInvariant();
  }
  Bar() {
    TestInvariant();
    DumpState();
    SetThreadPriorityCritical();
    if (!prevent_bar_flag) {
      // do bar
    }
    RestoreThreadPriority();
    DumpState();
    TestInvariant();
  }
}
Listing Six
class FuBar {
  Fu() {
    Lock();
    TestInvariant();
    DumpState();
    SetThreadPriorityCritical();
    // do fu
    RestoreThreadPriority();
    DumpState();
    TestInvariant();
    Unlock();
  }
  Bar() {
    Lock();
    TestInvariant();
    DumpState();
    SetThreadPriorityCritical();
    if (!prevent_bar_flag) {
      // do bar
    }
    RestoreThreadPriority();
    DumpState();
    TestInvariant();
    Unlock();
  }
}

Listing Seven
class BaseAspect {
  virtual void OnBefore() { /* do nothing */ };
  virtual void OnAfter() { /* do nothing */ };
  virtual bool OnProceedQuery() { return true; };
  virtual void OnException() { throw; };
  virtual void OnFinally() { /* do nothing */ };
};

Listing Eight
class StateDumpingAspect : public BaseAspect {
  virtual void OnBefore() { DumpState(); }
  virtual void OnAfter() { DumpState(); }
};

Listing Nine
typedef CROSSCUT(
          CROSSCUT(
            CROSSCUT(
              CROSSCUT(
                CROSSCUT(FuBar, FuBarPointCut, StateDumpingAspect),
                BarPointCut, BarPreventingAspect),
              FuBarPointCut, RealTimeAspect),
            FuBarPointCut, InvariantAspect),
          FuBarPointCut, SynchronizeAspect) NewFuBar;
Listing Ten
class BarPreventingAspect : public BaseAspect {
  virtual bool OnProceedQuery() { return (!global_prevent_bar_flag); }
};
class RealTimeAspect : public BaseAspect {
  virtual void OnBefore() { SetThreadPriorityCritical(); }
  virtual void OnAfter() { RestoreThreadPriority(); }
};
class InvariantAspect : public BaseAspect {
  virtual void OnBefore() { TestInvariant(); }
  virtual void OnAfter() { TestInvariant(); }
};
class SynchronizeAspect : public BaseAspect {
  virtual void OnBefore() { Lock(); }
  virtual void OnAfter() { Unlock(); }
};
DDJ
WINDOWS/.NET DEVELOPER
Building a Callout Control
A comic-style balloon Callout control
Thiadmer Riemersma
When you need to display a message that refers to another object, window, or user-interface element, a "callout" with an arrow to its target or a comic-style balloon is a better option than a message box. With this in mind, I designed a balloon-style Windows control that is sufficiently configurable to suit many purposes. I call this control a callout rather than a balloon to reduce the chance of confusion with the balloon tooltips that recent versions of Microsoft Windows provide. Figure 1 shows a few of the layouts that the callout control supports. A callout is not a tooltip — it is not attached to other controls (and it does not subclass other controls). You may have multiple callouts visible at a time, as well. As an added bonus, you can add other controls (buttons, check boxes) inside a callout. A callout is similar to a tooltip in the sense that it adapts its size and position to its contents. A callout is not a message box, either, because it does not enter a modal loop; therefore, it is better suited to display informational messages without disrupting the user's current activity. The tail of the callout, pointing to the object or icon/button that the message applies to, lets you write shorter messages in many circumstances.
I designed the callout control to be just like other standard controls — you create it with CreateWindow( ) and you configure the control by sending messages to it. The control supports standard messages such as WM_SETTEXT, WM_SETICON, and WM_SETFONT, plus a set of control-specific messages. For conformity, many style attributes, such as the border width and the radius of the balloon corners, are set with messages, too, rather than with style bits in the CreateWindow( ) call. There are only two basic styles for the callout tail: a triangular pointer or a sequence of three aligned ellipses. More variations are obtained by changing the dimensions of the tail or other attributes.

Thiadmer develops multimedia and system software for his company, CompuPhase, based in the Netherlands. He can be contacted at http://www.compuphase.com/.

Figure 1: Example layouts of the callout control.
Using the Callout Control
The callout control reformats and repositions itself at any time that its contents or style changes. In typical usage, you set the text (and other contents) and the style of the callout while it is hidden, then show it. The source code for the callout control can be compiled to a separate DLL. In that case, all that is required to create a callout is to load the DLL via LoadLibrary( ) and call CreateWindow( ) with the class name Callout.
#include "callout.h" LoadLibrary("callout.dll"); HWND hwndCallout = CreateWindow("Callout", "Guess what this icon does...", WS_POPUP, 0, 0, 0, 0, NULL, 0, hInstance, NULL); SendMessage(hwndCallout, CM_SETANCHOR, 0, MAKELONG(100, 200)); ShowWindow(hwndCallout, SW_SHOW);
Example 1: Sending messages.

#include "callout.h"

LoadLibrary("callout.dll");
HWND hwndCallout = CreateWindow(CALLOUT_CLASS, "Guess what this icon does...",
                                WS_POPUP, 0, 0, 0, 0, NULL, 0, hInstance, NULL);
Callout_SetAnchor(hwndCallout, 100, 200);
Callout_SetBorder(hwndCallout, 2, TRUE);
ShowWindow(hwndCallout, SW_SHOW);
Example 2: Using wrapper macros.
The DLL registers this global class name automatically when it is loaded. If, instead, you compile the source code for static linking to your application, the application must call CalloutInit( ) before it creates its first callout control. If so desired, the application can also call CalloutCleanup( ) at exit point to unregister the Callout class. The window style used in CreateWindow( ) can be WS_CHILD if you want the callout to be snapped inside its parent window, but usually WS_POPUP is more appropriate. Style bits like those for a border and a caption are redundant, as the callout control removes them on creation. The window position and size parameters are also ignored; the callout control positions and sizes itself based on the text and the coordinate pair that the "tail" of the callout points to. This coordinate pair is called the "anchor" point, and you can set it by sending a message to the control. The callout chooses an appropriate position based on its attributes and the anchor point. Instead of sending messages, as in Example 1, you may also use wrapper macros, as presented in Example 2. The callout control recalculates its shape and position and repaints itself every time that its style or contents change, with two exceptions. First, if the control is "hidden," the routine skips a few steps in the
calculation/repaint sequence. Second, nearly all custom messages that change the attributes of the control take a "repaint" parameter, and if this parameter is zero, the recalculation and repainting steps are suppressed (this parameter was inspired by the WM_SETFONT message). I think that it is best to create the callout control as hidden and set all attributes before showing it. If you have a sequence of attributes to set, you can avoid redundant calculations by setting the repaint parameter to True for the last attribute setting and to False for its predecessors. Table 1 gives an overview of the standard and custom messages that you can send to the callout control to change its appearance or behavior. There are wrapper macros for each of these messages. Figures 2 and 3 may also be illustrative in describing many of the custom messages. The control hides itself when users click in the callout with the left mouse button or after a timeout, which you must have set earlier. You can change this behavior by intercepting the WM_NOTIFY message that the control sends to its parent or owner window. The callout control sends a notification for the events NM_CLICK and NM_RCLICK, plus the custom event CN_POP, for when it is about to be automatically hidden. The default action of the
callout control is to hide itself — it does not destroy itself. To add additional controls in the callout, you must first reserve room in the callout for the controls. The callout assumes that you will place these controls below the text. That is, you reserve vertical space at the bottom of the text box area. The widest or right-most control also
sets the minimum width that the text box should have. After setting all the other styles, your program can query the balloon text box rectangle and position its controls relative to that rectangle; see Example 3. The callout control already handles the coloring for the added controls (it intercepts the WM_CTLCOLORxxx messages), but it does not set the font for the
#include "callout.h" LoadLibrary("callout.dll"); HWND hwndCallout = CreateWindow(CALLOUT_CLASS, "Everything is gone;\n" "Your life's work has been destroyed.\n" "Squeeze trigger?", /* David Carlson */ WS_POPUP, 0, 0, 0, 0, NULL, 0, hInstance, NULL); Callout_SetMinWidth(hwndCallout, 180, FALSE); Callout_SetExtraHeight(hwndCallout, 44, FALSE); LPRECT rc = Callout_GetRect(hwndCallout); HWND buttons[3]; buttons[0] = CreateWindow("Button", "Do not ask me this again", WS_CHILD | WS_VISIBLE | BS_AUTOCHECKBOX, rc->left, rc->bottom - 16, 180, 16, hwndCallout, 0, hInstance, NULL); buttons[1] = CreateWindow("Button", "Yes", WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON, rc->left, rc->bottom - 38, 70, 20, hwndCallout, 0, hInstance, NULL); buttons[2] = CreateWindow("Button", "No", WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON, rc->left + 110, rc->bottom - 38, 70, 20, hwndCallout, 0, hInstance, NULL); hfont = (HFONT)SendMessage(hwnd, WM_GETFONT, 0, 0); for (i = 0; i < 3; i++) SendMessage(buttons[i], WM_SETFONT, (WPARAM)hfont, TRUE);
Example 3: Positioning controls relative to a rectangle.
controls. Typically, you will want to use a different font than the System font for the embedded controls; Example 3 copies the font for the balloon control itself to the added buttons.
Implementation Details
Figures 2 and 3 show the anatomy of a callout control, where the dimensions of various attributes are controlled by the messages in Table 1. The size of the text box is calculated from the text, with the minimum width and the extra height for
embedded controls taken into account. The icon area (Figure 3) is only present when you set an icon. Some settings are "hard"; if you set the border thickness to be four pixels, the border will be four pixels thick — no more, no less. Other settings, such as the tail slant angle, are "soft": the callout control may change these attributes if it does not otherwise fit in the display. As you can see in Figure 2, the tail does not necessarily "touch" the anchor point: There is an optional vertical offset of the
tail to the anchor. The offset is useful if you want to avoid having the tail overlap part of the control that it points to. The default value for the tail offset is zero, though. In the first step in calculating the size and position of the callout, the control calculates the size of the text box using the DrawText( ) function. It attempts to format the text so that the width of the text box is at most four times its height — to create aesthetically pleasing text boxes.
Figure 2: Anatomy of a callout, dimensions and attributes (radius, icon area, text box, balloon, tail width, tail height, tail join, tail offset, tail slant, and anchor point).
Figure 3: The callout components.

Table 1: An overview of the standard and custom messages that you can send to the callout control. Each entry lists the message pair, the wrapper macro(s), and the meaning.
WM_GETFONT / WM_SETFONT: font handle to use for the text.
WM_GETICON / WM_SETICON: icon handle (optional).
CM_GETANCHOR / CM_SETANCHOR (Callout_GetAnchor(hwnd), Callout_SetAnchor(hwnd, x, y)): position of anchor in client coordinates.
CM_GETBACKCOLOR / CM_SETBACKCOLOR (Callout_GetBackColor(hwnd), Callout_SetBackColor(hwnd, Color, Repaint)): color of the interior of the balloon.
CM_GETBORDER / CM_SETBORDER (Callout_GetBorder(hwnd), Callout_SetBorder(hwnd, Width, Repaint)): border thickness.
CM_GETDRAWTEXTPROC / CM_SETDRAWTEXTPROC (Callout_GetDrawTextProc(hwnd), Callout_SetDrawTextProc(hwnd, Proc, Repaint)): text drawing function to replace DrawText().
CM_GETEXTRAHEIGHT / CM_SETEXTRAHEIGHT (Callout_GetExtraHeight(hwnd), Callout_SetExtraHeight(hwnd, Height, Repaint)): extra height of the balloon text box.
CM_GETMINWIDTH / CM_SETMINWIDTH (Callout_GetMinWidth(hwnd), Callout_SetMinWidth(hwnd, Width, Repaint)): minimum width of the balloon text box.
CM_GETRADIUS / CM_SETRADIUS (Callout_GetRadius(hwnd), Callout_SetRadius(hwnd, Radius, Repaint)): radius of rectangle rounding.
CM_GETRECT (Callout_GetRect(hwnd)): position and size of the balloon text box.
CM_GETTAILHEIGHT / CM_SETTAILHEIGHT (Callout_GetTailHeight(hwnd), Callout_SetTailHeight(hwnd, Height, Repaint)): height of the tail in pixels.
CM_GETTAILJOIN / CM_SETTAILJOIN (Callout_GetTailJoin(hwnd), Callout_SetTailJoin(hwnd, Join, Repaint)): horizontal position where the tail joins the balloon; a percentage of the balloon width.
CM_GETTAILOFFSET / CM_SETTAILOFFSET (Callout_GetTailOffset(hwnd), Callout_SetTailOffset(hwnd, Offset, Repaint)): vertical offset of the tail to the anchor point.
CM_GETTAILSLANT / CM_SETTAILSLANT (Callout_GetTailSlant(hwnd), Callout_SetTailSlant(hwnd, Slant, Repaint)): tail slant (tangent of the angle); a negative value slants to the left.
CM_GETTAILSTYLE / CM_SETTAILSTYLE (Callout_GetTailStyle(hwnd), Callout_SetTailStyle(hwnd, Style, Repaint)): tail style, CS_SPEAK or CS_THINK.
CM_GETTAILWIDTH / CM_SETTAILWIDTH (Callout_GetTailWidth(hwnd), Callout_SetTailWidth(hwnd, Width, Repaint)): width of the tail where it joins the balloon.
CM_GETTEXTCOLOR / CM_SETTEXTCOLOR (Callout_GetTextColor(hwnd), Callout_SetTextColor(hwnd, Color, Repaint)): color of the text and border of the callout.
CM_GETTIMEOUT / CM_SETTIMEOUT (Callout_GetTimeout(hwnd), Callout_SetTimeout(hwnd, Timeout)): timeout in milliseconds; 0 = no timeout.
CM_GETVERTALIGN / CM_SETVERTALIGN (Callout_GetVertAlign(hwnd), Callout_SetVertAlign(hwnd, Alignment, Repaint)): alignment of the callout relative to the anchor point, CS_ALIGNABOVE or CS_ALIGNBELOW.
The way that it does this is to first call DrawText( ) with the text string and only the flag DT_CALCRECT set. If the calculated rectangle is too wide, the callout control adjusts the rectangle's width based on the area (width times height) of the rectangle and makes a second call to DrawText( ), now with the flag DT_WORDBREAK set in addition to DT_CALCRECT. The DrawText( ) function only allows for minimal formatting (word wrapping and line breaks). If you wish to plug in another text-formatting engine in the callout control (HTML formatting seems a popular request), you can set it by sending a CM_SETDRAWTEXTPROC message to the control.
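In code, the two-pass measurement looks roughly like the following sketch (illustrative only, not the actual callout.c source; hdc is assumed to have the callout font selected, text is the callout text, and sqrt() comes from <math.h>):

/* Pass 1: measure the natural size of the text. */
RECT rc = { 0, 0, 0, 0 };
int width, height;
DrawText(hdc, text, -1, &rc, DT_CALCRECT);
width  = rc.right - rc.left;
height = rc.bottom - rc.top;
if (width > 4 * height) {
    /* width*height = area and width <= 4*height give width <= sqrt(4*area),
       so clamp the right margin and let DrawText() re-wrap the text. */
    rc.right = rc.left + (int)sqrt(4.0 * width * height);
    DrawText(hdc, text, -1, &rc, DT_CALCRECT | DT_WORDBREAK);   /* pass 2 */
}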
The only requirement is that the replacement is compatible with DrawText( ) at the call level, including the functioning of the DT_CALCRECT and DT_WORDBREAK flags. Once you know the width and the height of the text portion, you add space for an optional icon and margins for the balloon. Then the callout control goes looking for a suitable location for the balloon part of the callout. When you position a callout, you set the coordinates of the anchor point, the point that the stem or tail of the callout points to. The control initially puts the balloon part at a fixed offset from the anchor point, depending on the shape and size of the tail.
Optimizing the Text Box Shape
If you want to reformat a text box, for which we already have an initial width and height, so that it is at most four times as wide as it is high, you only need to combine the equation width × height = area, where the area is kept constant, with the upper bound for the width: width ≤ 4 × height. Simple algebra then gives you the criterion: width ≤ √(4 × area). For example, a measured box of 400×25 pixels (area 10,000) would be re-wrapped to a width of at most 200 pixels.
Calculating the Text Box with DrawText()
I found the description of the DT_CALCRECT flag in the documentation for DrawText( ) confusing and so I did a few experiments to verify the behavior. Fortunately, the DrawText( ) function is quite robust and more versatile than the documentation suggests. If DT_CALCRECT is present in the flags field of the DrawText( ) function, the function changes the right and the bottom fields of the formatting rectangle, but it does not touch the left and top fields. The value of the right field becomes the left field plus the width of the calculated bounding box of the text after formatting; the bottom field becomes the top field plus the height of the bounding box. Hence, with DT_CALCRECT set, the rectangle parameter is half input and half output. When you include DT_SINGLELINE in the flags field, the DrawText( ) function replaces all carriage return (CR) and line feed (LF) characters with a space. There is no special handling of pairs of CR and LF characters; each such character is replaced by a space character. When neither DT_SINGLELINE nor DT_WORDBREAK is set, DrawText( ) adheres to line-breaking characters in the input string, but it does not otherwise add line breaks. Any occurrence of the CR or LF characters is a
line break; a combination like CR-LF or LF-CR counts as a single line break, but CR-CR or LF-LF is a double line break. A line break does not add to the width of the bounding text box, but it does add to its height. The DT_WORDBREAK flag makes DrawText( ) use the value of the right field of the formatting rectangle parameter as the right margin, and it tries to format the text inside this margin. DrawText( ) only breaks lines between words; there is no hyphenation algorithm hidden in the function. If the input string contains a word that is wider than the formatting rectangle, DrawText( ) extends the formatting rectangle to the width of that word. In a typical situation, after word breaking, the text fits a narrower rectangle than the input formatting rectangle. When DT_WORDBREAK is combined with DT_CALCRECT, DrawText( ) stores the updated right margin back in the formatting rectangle. So when you use DT_CALCRECT together with DT_WORDBREAK, the resulting formatting rectangle may be either wider (in the presence of a long, nonbreakable word) or narrower than the input rectangle. I experienced DrawText( ) entering an infinite loop when the left field of the formatting rectangle was set to a value greater than the right field on input, and the DT_WORDBREAK flag was set. In this case, the input width of the formatting rectangle is negative, so this is arguably an input error and the behavior of DrawText( ) should be expected to be undefined. DrawText( ) does not have any difficulty when the left and right fields are set to the same value (input width is zero). I verified this DrawText( ) behavior on Windows 98/2000, by the way. —T.R.
However, the balloon part is always snapped inside the bounds of the display, and the callout control also tries to avoid overlapping other callout controls. When there are multiple monitors attached to a system, you can no longer assume that (0,0) points to the upper left corner of the display, nor use GetSystemMetrics( ) to get the working area of the display. Starting with Windows 98, Microsoft added multimonitor support functions to the Windows API. In a typical configuration, the display areas of the monitors are combined into a virtual screen (with a width that is the sum of the widths of the monitor areas). To make it easier to port existing software to become multimonitor aware, Microsoft also provides the file MULTIMON.H. As an added bonus, MULTIMON.H automatically provides substitute (stub) functions for those versions of Microsoft Windows that do not support multiple monitors. To add multimonitor support, you need to include MULTIMON.H in your source files. In one of these files, you should declare the macro COMPILE_MULTIMON_STUBS before including the file so that the substitute functions get defined. From that point on, the multimonitor functions like MonitorFromPoint( ) and GetMonitorInfo( ) are available. As a side note, the function GetMonitorInfo( ) returns a "monitor rectangle" and a "work rectangle." The monitor rectangle describes the coordinates of the display area of the monitor in the larger virtual screen; the work rectangle is the monitor rectangle minus any task bar on that monitor. To avoid overlapping sibling controls, the callout control uses the Windows function FindWindowEx( ) to walk through all windows of the Callout class. All example code for FindWindowEx( ) that I have come across uses FindWindowEx( ) to get a handle on child windows, but the function can also locate any nonchild (popup, overlapped) window. In experimenting with FindWindowEx( ), I noticed that the order in which FindWindowEx( ) returns handles to the callout windows was consistently the inverse of their creation order, but the callout control does not rely on any order. When the callout control finds that, in its preferred situation, it overlaps another callout control, it moves itself to the left or the right, depending on which direction gives the shortest displacement. If that move causes another conflict, the callout toggles the vertical alignment (from above to below the anchor point, or vice versa) and tries again, perhaps after another horizontal movement.
To keep the tail pointing at the anchor point after moving horizontally, the callout first changes the position where the tail joins the balloon. If that is not enough, the callout changes the slant of the tail. It is quite possible that all these attempts to find a good fit fail, especially because of the additional requirement that the callout control must fit inside the display area of a single monitor. So in some (crowded) situations, a callout may overlap another callout, but at least it tries to avoid that.
A final matter is whether the callout is a child window or a popup window. A child window is clipped inside the client area of its parent; a popup window can extend beyond the parent window's rectangle. For the positioning algorithm of the callout control, it therefore becomes important to check whether the control has a parent window— a popup window has an owner window but (usually) not a parent. Here, the Windows API becomes confusing. In spite of its name, the GetParent( ) function may return the owner window instead of the parent window. According to the oft-cited Knowledge Base article Q84190, there is no good way to find the real parent of a window in all circumstances. But actually, that article is out of date, because Windows 98 introduced the function GetAncestor( ), which does not mix up owner and parent windows. I was not prepared to give up Windows 95 yet, so the callout control implements its own fix for GetParent( ), called GetChildParent( ), which checks for the "child window" style flags before calling GetParent( ).
After the size and position of the balloon part of the callout are determined, the control can finally calculate the size and position of the complete window, taking the callout tail into consideration. To create a nonrectangular window, I use window regions. Creating such a region is fairly simple because Windows provides both the functions to create basic regions and the functions to combine regions (union, intersection, and others). When I started implementing the control, I thought that the most difficult part would be to draw the outline of the callout exactly along the edge of the region. Fortunately, it turns out that the Windows function FrameRgn( ) does exactly that in one simple call.
Painting windows with a window region is simplest if the window does not have any nonclient areas, such as a border or caption. When you create a callout window, the callout control checks for these flags and removes them if set. If you wish to create a callout window without a border, you must set the border width to zero by sending a message, rather than by adjusting the window style.
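As a sketch of the region approach (again, not the actual Callout.c code, and with arbitrary sizes), the balloon and tail can be combined like this; PaintOutline( ) would be called from the WM_PAINT handler:

#include <windows.h>

void SetCalloutShape(HWND hwnd)
{
    POINT tail[3] = { { 40, 60 }, { 70, 60 }, { 55, 90 } };  /* the tail triangle */
    HRGN balloon = CreateRoundRectRgn(0, 0, 200, 61, 16, 16);
    HRGN tailrgn = CreatePolygonRgn(tail, 3, WINDING);

    CombineRgn(balloon, balloon, tailrgn, RGN_OR);  /* union of balloon and tail */
    DeleteObject(tailrgn);

    /* The system owns the region after this call; do not delete it here. */
    SetWindowRgn(hwnd, balloon, TRUE);
}

void PaintOutline(HWND hwnd, HDC hdc)
{
    HRGN rgn = CreateRectRgn(0, 0, 0, 0);
    if (GetWindowRgn(hwnd, rgn) != ERROR)   /* retrieves a copy of the window region */
        FrameRgn(hdc, rgn, (HBRUSH)GetStockObject(BLACK_BRUSH), 1, 1);
    DeleteObject(rgn);
}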
Conclusion
Callout.c (available electronically; see "Resource Center," page 5) is the implementation of the callout control. Custom messages, wrapper macros, and other constants are in the include file CALLOUT.H. There is a BUILD.BAT batch file in the source archive to compile it into a DLL. The batch file has command lines for Borland and Microsoft C compilers; you may have to edit it to uncomment the lines for your compiler before running it.
The file MULTIMON.H is part of the Win32 Platform SDK, but you can also get it (in a smaller download) from the archives of Microsoft Systems Journal. The June 1997 issue carries a detailed article on multimonitor support, which you can still read at http://www.microsoft.com/msj/0697/monitor/monitor.aspx.
The two Haiku error messages at the bottom of Figure 1 are by David Carlson and David Dixon, who wrote them for the Haiku error-message contest for Salon.com (see http://archive.salon.com/21st/chal/1998/02/10chal2.html).
Window regions are not the only method that allows for nonrectangular windows. Windows 2000 introduced "layered windows" with a transparent color key. Layered windows are easier to use than window regions, but they require Windows 2000 or XP. Apart from constructing a region from primitive shapes, a tool like RegionCreator (found at http://www.flipcode.com/) makes a region from a bitmap with a specific color set as transparent.
Despite the degree of customization that the callout control already provides, I foresee that most future improvements will focus on adding more visual styles and increasing the flexibility of the control. I am confident that the current implementation is useful as it is, and that it will be easily extensible if it needs to be.
DDJ
More .NET On DDJ.com
Build Events and Visual Studio IDE Tools: If you need the greatest control over the build process performed by the VS.NET IDE, then C++ has the edge over C# and VB.NET.
Lightweight Collections From Web Service Methods: The DataSet object's rich feature set makes it a popular data type in pure .NET projects, but if your web service needs to be truly interoperable, avoid DataSets and instead rely on a collection.
Available at http://www.ddj.com/topics/dotnet/.
WINDOWS/.NET DEVELOPER
Tracing Program Execution & NUnit
Instrumenting and listening to your code
Paul Kimmel
Paul is chief architect for SoftConcepts and author of Advanced C# Programming (McGraw-Hill, 2002), among other books. He can be contacted at pkimmel@softconcepts.com.

I like to watch The Discovery Channel television show American Chopper, about the Teutul family (http://www.orangecountychoppers.com/), who build some amazing motorcycles. As well as being entertaining, the show demonstrates how the Teutuls build these very cool bikes in a short time from about 90 percent stock parts. According to the show, they turn around custom choppers in about a week. How do they do it and what can we learn from them?
The key to the success of Orange County Choppers is that they do use stock parts, and most of the pieces and some of the labor are supplied by specialists. For instance, Paul Teutul, Jr. may start with a stock frame, or request a custom frame with some pretty basic specifications. On the "Fire Bike" episode, for example, they added a custom carburetor that looked like a fire hydrant. The key elements here are that a carburetor is a well-defined pattern and that the engineers who built it are experts at building carburetors. Collectively, the Teutuls use stock parts and improvise selectively for the extra cool factor.
In our business, stock tools are existing components and frameworks. The more we use stock components and frameworks and rely on experts, the less time we need to spend just meeting deadlines. In addition, this surplus of energy and time can be exploited to add the fit-and-finish that exceeds expectations and excites users. To this end, in this article I examine the open-source NUnit tool and the .NET Framework's TraceListeners, which give you a means of easily and professionally eliminating bugs from code.

Instrument Code As You Write It
To instrument code means to add diagnostic code that lets you monitor and diagnose the code as it runs. One form of instrumenting code is to add Trace statements that tell you what the executing code is really doing. Granted, tracing has been around a while, but only recently has it been included in the broader concept referred to as "instrumenting" code. Adding Trace statements as you progress is a lot easier and more practical than adding them after the solution code has been written. Instrumenting as you go is better because you know more about the assumptions you are making when you are implementing the solution.
The basic idea is simple: When you write a method or property, add a Trace statement that indicates where the instruction pointer is and what's going on. This is easy to do. If you are programming in C#, add a using statement that refers to the System.Diagnostics namespace and call the static method Trace.WriteLine, passing a string containing some useful text. The Trace class maintains a static collection of TraceListeners, and one Trace statement multicasts to every listener in the collection. This implies — and is the case, in fact — that you can create a custom TraceListener. A default TraceListener, which is always in the listeners collection, sends information to the Output window in VS.NET, but you need to be running VS.NET to see these messages. In addition to supporting custom TraceListeners, the .NET Framework lets you turn tracing on and off after deployment. This means you can instrument your code
with Trace statements, leave them in when you deploy your code, and turn them back on in the field by modifying the application's or machine's external XML config file.
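The listings that follow don't show how such a switch is wired up. One common arrangement (a sketch, not code from the article) is to guard the Trace calls with a BooleanSwitch and let the application's .exe.config file supply both the switch value and a file-based listener; the switch name GreetingsTrace and the log filename greetings.log here are hypothetical:

<configuration>
  <system.diagnostics>
    <switches>
      <!-- read in code with: new BooleanSwitch("GreetingsTrace", "Greetings tracing") -->
      <add name="GreetingsTrace" value="1" />  <!-- 1 = on, 0 = off -->
    </switches>
    <trace autoflush="true">
      <listeners>
        <!-- capture Trace output to a file, even when VS.NET is not running -->
        <add name="fileListener"
             type="System.Diagnostics.TextWriterTraceListener"
             initializeData="greetings.log" />
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>

Code that tests the switch's Enabled property before calling Trace.WriteLine can then be silenced or re-enabled in the field without recompiling.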
Unit Test Frequently
The next thing you need is a process for testing chunks of code. Testing should occur early and often, and it is more practical and prudent to test in iterations, especially since NUnit makes it so easy. Returning to the custom chopper analogy, I guarantee you that the company supplying motors to the Teutuls turns them over and runs them a bit before they ever get on a bike. This is because the motor is an independent, testable entity, a component if you will.
NUnit (http://www.nunit.org/) is built with .NET. NUnit 2.1 is a unit-testing framework for .NET languages. Although originally modeled after JUnit, NUnit is written in C# and takes advantage of numerous .NET language features. To test code, all you need do is download and install NUnit. Then create a class library and add some attributes to classes containing test code. .NET custom attributes defined by the nunit.framework namespace are used to tag classes as test fixtures and to tag methods as initialization, deinitialization, and test methods. For example, Listing One is the canonical HelloWorld.exe application, and Listing Two is a class library that implements tests.
In Listing One, a small sample class keeps down the noise level. Greetings shows you what you need to see: the Trace.WriteLine statement. (I use Trace statements in methods with this low level of complexity if the method is important to the solution domain.) In Listing Two, I add a reference to the library containing the Greetings class and a using statement introducing its encompassing namespace. Next, I add a reference to the nunit.framework.dll assembly and its namespace. After that, you just need a class tagged with the TestFixtureAttribute — dropping the attribute suffix by convention — and the TestAttribute on public methods that return void and take no arguments. If you load the test library in NUnit, it takes care of the rest. Green for "pass" and red for "fail" (see Figure 1) removes all ambiguity from the testing process.
Figure 1: The test passed.
Figure 2: Listening for Trace messages in NUnit.
NUnit was written by .NET Framework experts. If you look at the NUnit source, you see that they knew how to dynamically create AppDomains and load assemblies into these domains. Why is a dynamic AppDomain important? The dynamic AppDomain lets NUnit stay open while permitting you to compile, test, modify, recompile, and retest code without ever shutting the tool down. You can do this because NUnit shadow copies your assemblies, loads them into a dynamic domain, and uses a file watcher to see if you change them. If you do change your assemblies, then NUnit dumps the dynamic AppDomain, recopies the files, creates a new AppDomain, and is ready to go again. The collective result is that NUnit facilitates testing while reducing the amount of scaffolding you have to write to run the tests, and it eliminates the time between testing, modifying, and retesting code. You focus on the code that solves the problem and on the tests, not on the testing utility itself.
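NUnit's own loader is more involved, but the general approach just described (a shadow-copied AppDomain watched by a file watcher) can be sketched as follows; the directory, domain name, and class here are hypothetical:

using System;
using System.IO;

class TestDomainHost
{
    AppDomain testDomain;
    readonly string testPath = @"C:\MyTests";    // directory holding the test assemblies

    public void Load()
    {
        AppDomainSetup setup = new AppDomainSetup();
        setup.ApplicationBase = testPath;
        setup.ShadowCopyFiles = "true";          // copies are loaded, so the originals stay unlocked
        testDomain = AppDomain.CreateDomain("TestDomain", null, setup);
        // ... load the test assembly into testDomain and run the tests ...
    }

    public void Watch()
    {
        FileSystemWatcher watcher = new FileSystemWatcher(testPath, "*.dll");
        watcher.Changed += new FileSystemEventHandler(OnAssemblyChanged);
        watcher.EnableRaisingEvents = true;
    }

    void OnAssemblyChanged(object sender, FileSystemEventArgs e)
    {
        AppDomain.Unload(testDomain);            // throw away the old domain...
        Load();                                  // ...and rebuild it from the new binaries
    }
}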
Listening to Your Code
Once you have instrumented your code with Trace statements and written NUnit tests, wouldn't it be nice to be able to see the output from those Trace statements? Remember that the Trace class writes to every listener in the Trace.Listeners collection. All you need to do is implement a custom TraceListener, and NUnit not only tells you whether a test passed or failed, but also shows you what's going on behind the scenes.
Listing Three shows how to implement a sufficient TraceListener for NUnit. (The new code relative to Listing Two is the Listener class and the code in Init and Deinit that adds and removes it.) Inside the file containing the TestFixture, I added a custom TraceListener. The custom listener overrides the Write and WriteLine methods and sends the message to the Console. NUnit redirects standard output (the Console) to NUnit's Standard Out tab (Figure 2). To finish up, you stuff the listener in the Trace.Listeners collection.
Now that you have NUnit listening for Trace messages, you can run the tests and all of your Trace statements are written to the Standard Out tab. When the tests are all green, things are going okay.

Conclusion
If you instrument your code with Trace statements, define a custom listener, and use NUnit tests, you have some powerful but easy-to-use code working on your behalf. Making this a regular part of your software development process goes a long way toward speeding up development and improving the end result.
DDJ
Listing One

using System;
using System.Diagnostics;

namespace TestMe
{
    public class Greetings
    {
        public static string GetText()
        {
            Trace.WriteLine("Greetings.GetText called");
            return "Hello, World";
        }
    }
}
Listing Two

using NUnit.Framework;
using TestMe;

namespace Test
{
    [TestFixture()]
    public class MyTests
    {
        [SetUp()]
        public void Init()
        {
            // pre-test preparation here
        }
        [TearDown()]
        public void Deinit()
        {
            // post-test clean up
        }
        [Test()]
        public void GreetingsTest()
        {
            Assertion.AssertEquals("Invalid text returned",
                "Hello, World", Greetings.GetText());
        }
    }
}
Listing Three

using System;
using System.Diagnostics;
using NUnit.Framework;
using TestMe;

namespace Test
{
    public class Listener : TraceListener
    {
        public override void Write(string message)
        {
            Console.Write(message);
        }
        public override void WriteLine(string message)
        {
            Console.WriteLine(message);
        }
    }

    [TestFixture()]
    public class MyTests
    {
        private static Listener listener = new Listener();

        [SetUp()]
        public void Init()
        {
            if (!Trace.Listeners.Contains(listener))
                Trace.Listeners.Add(listener);
        }
        [TearDown()]
        public void Deinit()
        {
            Trace.Listeners.Remove(listener);
        }
        [Test()]
        public void GreetingsTest()
        {
            Assertion.AssertEquals("Invalid text returned",
                "Hello, World", Greetings.GetText());
        }
    }
}
DDJ
WINDOWS/.NET DEVELOPER
Synchronization Domains
How to use .NET synchronization domains
Richard Grimes
Richard is the author of Programming with Managed Extensions for Microsoft Visual C++ .NET 2003 (Microsoft Press, 2003). He can be contacted at richard@richardgrimes.com.

A blocked thread is a thread that does no work. Moreover, a blocked thread could have exclusive access to a resource, and another thread could block waiting to get exclusive access to the same resource. Deadlocks caused by situations like this are notoriously difficult to reproduce and debug. It's far better to design code so that deadlocks do not occur. In this article, I focus on one facility .NET provides that does this — synchronization domains.

COM and Synchronization Domains
Synchronization domains are not new to Windows. COM provided synchronization through apartments. Specifically, the single-threaded apartment (STA) contained a single thread used to access all the objects in the apartment. But the STA is more than just a mechanism to restrict access to a single thread — it is implemented to use a Windows message queue to store calls into the STA so that the requests are synchronized with Windows messages. This means that STA objects running on a GUI thread won't freeze the user interface, and that the message queue can be used to provide safe reentrance so that, while an STA object is making a call out of the apartment, another object can make a call into the apartment.
COM users could utilize STA synchronization to manage multiple objects that required access to a shared resource. To do this, you make all of those objects run in the same STA, which by definition
means that a single thread handles calls to all of the objects. Since there is only one thread, this means that only one thread at any time can have access to the resource.
To use STA synchronization, a COM class has to be marked in the registry with the string value ThreadingModel of Apartment. Inevitably, this is an issue because information about a COM class is stored in a separate location from the class. However, the COM object does not take part in synchronization; it is the COM system's responsibility to ensure that the object is created in an STA, creating one if it does not exist. Furthermore, if an STA object creates another STA object, the COM system ensures that both objects run in the same (single-threaded) apartment.
This synchronization was further extended with Microsoft Transaction Server (MTS) and COM+, which has a facility called an "activity." In effect, an activity is a logical thread that can extend across more than one process or machine. The COM+ system needs to determine whether a class should run in an activity, and to do this it consults the COM+ catalog for the class's metadata. COM+ metadata is more expressive than COM's because it identifies not only whether the class should run in an activity, but also whether objects of the class can run in an existing activity or need a new activity. However, the COM+ catalog still has the same issue as COM registration: The metadata is stored in a separate location and there's a possibility of a third party changing the metadata to something inappropriate.

.NET Contexts
COM+ is essentially a precursor for .NET. COM+ introduced component services and object context, which provides automatic transaction enlistment and activities but does not let you customize contexts. .NET also provides component services but, in contrast to COM+, the .NET context architecture is extensible. .NET component services are provided through context attributes on context-bound objects. A context is an execution environment that contains one or more objects that require the same component services. When
a context-bound object is created, the runtime checks the context where the activation request was made. If this context provides the services required by the new object, then the object is created in the creator's context. If the creator's context is not suitable, the runtime creates a new context.
In general, any object accessing a context-bound object does so through a proxy object, and when the proxy is created (the context-bound object is marshaled to the other context), the runtime sets up a chain of "sink" objects that add the component services of the context to a method invocation. When an object is called across a context boundary, the method invocation is converted into a message object that is passed through the chain of sinks.
Component services are only applied to context-bound objects, which means that the class derives (directly or indirectly) from ContextBoundObject. This class derives from MarshalByRefObject, and together the two classes mean that, after the object is created, it always remains in the same context; accessing it from another context requires marshaling. A class developer indicates the component services that the class uses by applying an attribute to the class. This means that the metadata is part of the class. Only a developer can change the metadata or the implementation of the class that uses the component services. Class metadata is no longer the fragile data that it was with COM and COM+.

Context Attributes
Context attributes are more than just metadata — they are derived from a class called ContextAttribute, which means that they have the responsibility of determining whether a candidate context has the required component services. The context attribute also has the responsibility of creating the sink object that provides the component service. The runtime calls the attribute and passes the next sink in the chain so that the attribute can create its own sink object and insert it into the chain. Figure 1 is a simplified schematic of this mechanism. The sink objects are only called when a method call is made across
a context boundary. The proxy object looks like the object in the object context and has the responsibility of converting a method call into a message object. In effect, this message is a serialization of the method parameters and information about the method, and it is passed through the sink chain in the client context, through a special channel object that makes the cross-context call, and then through the sink chain in the object's context. Each sink object can handle the method call according to the component service it is providing. Sinks can read the data in the message and can provide processing before passing the message to the next sink, or may even decide to handle the message themselves without calling the next sink (for example, if the sink chooses to generate an exception). The final sink in the chain builds the stack from the data in the message and uses this to call the object method.

Synchronization
The runtime provides the [Synchronization] attribute in the System.Runtime.Remoting.Contexts namespace to add synchronization to a context-bound object. This attribute is initialized with two values — one value determines whether the context should be reentrant, and the other value specifies how the synchronization context is created. Access to an object that has synchronization is subject to a lock. A thread attempting to access the object has to obtain the lock and, while one thread possesses the lock, no other thread can have access. A separate lock for every object would be rather wasteful and, more worrisome, would present the possibility of deadlock. Instead, .NET defines synchronization domains, where objects share a single lock and only one thread at any time can possess the lock.
The synchronization domain is controlled by the flag parameter of the [Synchronization] constructor. This parameter is an integer even though only four values are acceptable; these four possible values are defined as constant fields as part of the SynchronizationAttribute class. This design is not typesafe, and it causes
problems if you use Managed C++. The Managed C++ compiler insists that a value used as an argument to an attribute must be evaluated at compile time, and it considers this to be the case only if the field has the special C++ metadata modifier IsConstModifier. This modifier is not used on the fields of SynchronizationAttribute, and so the C++ compiler generates error C2363. This problem can be solved by casting the field to Int32 before using it as a parameter to the attribute.
The two most important values that can be passed to the flag parameter are REQUIRED and REQUIRES_NEW. In both cases, the object's context must synchronize access, and the values indicate whether the object can be created in an existing synchronized context or requires a new context. The synchronized context is generally called a "synchronization domain." A synchronization domain can contain one or more objects that share the same synchronization lock. REQUIRED is the default and indicates that the creator's context can be used if it is a synchronized context. This means that the creator and the new object are in the same synchronization domain and share the same lock. If the creator's context is not synchronized, a new context is created. REQUIRES_NEW indicates that a new context is always created, regardless of the context of the creator. This is useful for a class factory object, which creates objects but should not need synchronized access to the objects it creates.
Unlike STA synchronization, .NET synchronization domains do not have thread affinity; that is, a synchronization domain does not restrict access to the objects it contains to a single thread. Instead, it restricts access to a single thread at one particular time. That is, once one thread has completed its work within the synchronization domain and has released the lock, another thread can obtain the lock and access the objects. Contrast this with STA synchronization, where only one specific thread can access the objects throughout the lifetime of the apartment.
For example, Listing One shows a base class that has a single method, Run( ). This method extracts the name of the class and
Figure 1: Schematic showing calls between context-bound objects.
prints it 100 times on the console. The console is a shared resource, which means that you should be careful when writing to it with multiple threads. Listing Two shows two classes derived from the base class and a third class that runs those two classes on separate thread pool threads. Base.Run( ) calls Thread.Sleep( ) every iteration of the for loop, and this has the effect of relinquishing the time slice after the current thread has slept for the specified number of milliseconds. Since there are two threads running concurrently, it means that, when one thread relinquishes its time slice, it lets the other thread run. This means that the code in Listing Two should print a sequence of AB pairs on the console. (The sequence starts with a line of As and ends in a line of Bs because it takes a little time for a pool thread to run the method in response to the second call to QueueUserWorkItem.)
To synchronize the calls to the objects, you should apply the [Synchronization] attribute with REQUIRED. This attribute is inherited and, since the two test classes are derived from Base, it makes sense to add the attribute to that class. However, if you rerun the code with this change, the result is the same as before. The reason is that both objects are created in the App.Start( ) method, which is not running in a synchronized context, so the runtime creates two synchronized contexts, one for each of the objects. The runtime makes no attempt to search for a suitable context, other than checking to see if the creator's context is suitable. To create a single synchronization domain, you need to create an object in a synchronized context and make this object create the additional object. To do this, class A has a method that creates and returns the B object, CreateB( ), and App.Start( ) uses this method to create the B object. Listing Three shows these changes.

Implementation of SynchronizationAttribute
It is instructive to dig deeper into the implementation of SynchronizationAttribute, to get a better understanding of how synchronization works and to learn techniques that you can apply elsewhere. You can view the implementation either by reading the IL with ILDASM or by decompiling the assembly with something like Anakrino (http://www.saurik.com/net/exemplar/) or Reflector (http://www.aisto.com/roeder/dotnet/). Better yet, you could take a look at the synchronizeddispatch.cs source file in the Shared Source CLI (http://msdn.microsoft.com/net/sscli/). This shows that the client-side sink is created from a class called SynchronizedClientContextSink. Client-side sinks are used when an object in a synchronized context makes a call to
an object outside of the context. In this case, the code behaves differently for objects that allow reentrance and those that don't. In the former case, the synchronization lock has to be released to allow another thread to call into the synchronization domain. Then, when the outgoing call returns, the thread has to wait to obtain the synchronization lock. If the code is not reentrant, it means that threads cannot execute code in the domain while the outgoing call is active, and so the thread does not have to release the lock.
The object-side sink is called SynchronizedServerContextSink, and it handles the situation when a call from outside the context is made on the object. In this example, the thread-pool thread would be executing in a different context, and so the SynchronizedServerContextSink methods are invoked when this thread accesses instances of either A or B. It is important to note that both of these sink objects hold a reference to the attribute that created them. This means that the client and object contexts must be in the same application domain.
The synchronization process relies on two main objects: a Queue (a first-in/first-out queue) to hold information about the method request and a Monitor to perform interthread communication. The synchronization works like this: First, a call comes into the context and the code checks to see if any requests are queued. If there are no queued method invocation requests, the current thread gains the synchronization lock by setting an internal flag to True and the current method invocation request is passed to the object. If a second request is made while another request is being handled, a check on the synchronization flag will indicate that the domain is locked and so the request is put into the queue as a WorkItem object. This second thread
is blocked. To do this, Monitor.Wait( ) is called on the WorkItem object that was put in the queue. Meanwhile, the first thread continues its work and, when it has completed, it extracts the next WorkItem from the queue and passes it to Monitor.Pulse( ), which wakes up the waiting thread. This woken thread will be the only thread running in the synchronization domain, so it automatically gets the lock. This thread can then execute its work and, if any other threads try to get access to any object in the domain, those method calls are queued until the current thread has completed its work. When the last thread has completed its work, the synchronization flag is cleared, ready for the next batch of calls that may happen sometime later. There are two important points to note here:
• Multiple threads can access objects in a synchronization domain, but only one thread can execute at any time and the other threads block while the corresponding invocation request is queued.
• Once one thread is executing an object in the domain, all of the objects are locked and no instance members on any of the objects can be accessed by any other thread. This aspect of synchronization domains means that even if other members on the objects are thread safe, they still cannot be accessed by other threads. There is no mechanism to indicate that a member is thread safe and so should not be subject to the synchronization lock.
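In stripped-down form, the queue-and-Monitor scheme described above looks something like the following. This is only an illustration of the idea, not the actual SynchronizationAttribute source; Enter( ) would be called by the server-side sink before dispatching a call and Leave( ) after it returns:

using System.Collections;
using System.Threading;

class WorkItem
{
    public bool Ready;                          // guards against a lost wakeup
}

class SyncDomainLock
{
    readonly Queue queue = new Queue();         // queued invocation requests
    bool busy;                                  // the synchronization "lock" flag

    public void Enter()
    {
        WorkItem item;
        lock (queue)
        {
            if (!busy) { busy = true; return; } // lock acquired immediately
            item = new WorkItem();
            queue.Enqueue(item);                // must wait our turn
        }
        lock (item)
        {
            while (!item.Ready)
                Monitor.Wait(item);             // blocked until pulsed by Leave()
        }
    }

    public void Leave()
    {
        WorkItem next = null;
        lock (queue)
        {
            if (queue.Count > 0)
                next = (WorkItem)queue.Dequeue();  // hand the lock to the next caller
            else
                busy = false;                      // domain is idle again
        }
        if (next != null)
        {
            lock (next)
            {
                next.Ready = true;
                Monitor.Pulse(next);               // wake exactly that thread
            }
        }
    }
}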
Listing One

class Base : ContextBoundObject
{
    // Write the class name 100 times on the console
    public void Run(object o)
    {
        string str = GetType().ToString();
        for (int i = 0; i < 100; i++)
        {
            Console.Write(str);
            Thread.Sleep(20);
        }
        Console.WriteLine();
    }
}

Listing Two

class A : Base {}
class B : Base {}
class App
{
    static void Main()
    {
        App app = new App();
        app.Start();
        // Keep main thread alive
        Console.ReadLine();
    }
    void Start()
    {
        A a = new A();
        B b = new B();
        ThreadPool.QueueUserWorkItem(new WaitCallback(a.Run));
        ThreadPool.QueueUserWorkItem(new WaitCallback(b.Run));
    }
}
.NET and COM+ Activities
.NET still gives access to COM+ activities and, rather confusingly, it also uses an attribute called [Synchronization], this time in the System.EnterpriseServices namespace, to indicate that an object will take part in an activity. However, this attribute is significantly different in meaning and implementation from the native synchronization attribute I have just described. [Synchronization] also takes a parameter to indicate whether the object can join an existing activity or must have its own activity; however, this time the parameter is an enumeration, which makes it typesafe. COM+ objects must run in a COM+ context to benefit from COM+ component services, and a managed class opts in by deriving from ServicedComponent. However, the COM+ [Synchronization] is not a context attribute, and it is not .NET that applies the synchronization. Instead, when the serviced component is registered with COM+, the regsvcs tool uses reflection to read information about the serviced components in an assembly. The [Synchronization] attribute on a class merely means that the regsvcs tool should add the synchronization metadata to the class's entry in the COM+ catalog. Again, the catalog is consulted by COM+ when it activates objects, and the COM+ runtime then creates and manages activities if they are required.
Registering an assembly with the COM+ catalog adds extra overhead to your application deployment, and there is runtime overhead because the object is running under COM+ (although this overhead will be minimized if the .NET code is configured to be a COM+ library). If you want to have a synchronization domain within a process, then the .NET synchronization domain should be used in preference to a COM+ activity. However, if you want to synchronize objects that use process isolation (rather than the lightweight .NET application domain isolation) or run on different machines, then you can only use a COM+ activity.
DDJ

Listing Three

[Synchronization(SynchronizationAttribute.REQUIRED)]
class Base : ContextBoundObject
{
    // rest of the code is the same as before
}
class A : Base
{
    // Create a B object in the same context as the A object
    public B CreateB()
    {
        return new B();
    }
}
class B : Base {}
class App
{
    static void Main()
    {
        App app = new App();
        app.Start();
        // Keep main thread alive
        Console.ReadLine();
    }
    void Start()
    {
        A a = new A();
        B b = a.CreateB();
        ThreadPool.QueueUserWorkItem(new WaitCallback(a.Run));
        ThreadPool.QueueUserWorkItem(new WaitCallback(b.Run));
    }
}
DDJ
C++ and the Perils of Double-Checked Locking: Part II
Almost famous — the volatile keyword
Scott Meyers and Andrei Alexandrescu
Scott is author of Effective C++ and consulting editor for the Addison-Wesley Effective Software Development Series. He can be contacted at http://aristeia.com/. Andrei is the author of Modern C++ Design and a columnist for the C/C++ Users Journal. He can be contacted at http://moderncppdesign.com/.

In the first installment of this two-part article, we examined why the Singleton pattern isn't thread safe, and how the Double-Checked Locking Pattern addresses that problem. This month, we look at the role the volatile keyword plays in this, and why DCLP may fail on both uni- and multiprocessor architectures.
The desire for specific instruction ordering makes you wonder whether the volatile keyword might be of help with multithreading in general and with DCLP in particular. Consequently, we restrict our attention to the semantics of volatile in C++ and further restrict our discussion to its impact on DCLP. Section 1.9 of the C++ Standard (see ISO/IEC 14882:1998(E)) includes this information (emphasis is ours):
The observable behavior of the [C++] abstract machine is its sequence of reads and writes to volatile data and calls to library I/O functions. Accessing an object designated by a volatile lvalue, modifying an object, calling a library I/O function, or calling a function that does any of those operations are all side effects, which are changes in the state of the execution environment.
In conjunction with our earlier observations that the Standard guarantees that all side effects will have taken place when sequence points are reached and that a sequence point occurs at the end of each C++ statement, it would seem that all we need to do to ensure correct instruction order is to volatile-qualify the appropriate data and sequence our statements carefully. Our earlier analysis shows that pInstance needs to be declared volatile, and this point is made in the papers on DCLP (see Douglas C. Schmidt et al., "Double-Checked Locking" and Douglas C. Schmidt et al., Pattern-Oriented Software Architecture, Volume 2). However, Sherlock Holmes would certainly notice that, to ensure correct instruction order, the Singleton object itself must also be volatile. This is not noted in the original DCLP papers and that's an important oversight. To appreciate how declaring pInstance alone volatile is insufficient, consider Example 7 (Examples 1–6 appeared in Part I of this article; see DDJ, July 2004). After inlining the constructor, the code looks like Example 8. Though temp is volatile, *temp is not, and that means that temp->x isn't, either. Because you now understand that assignments to nonvolatile
data may sometimes be reordered, it is easy to see that compilers could reorder temp->x’s assignment with regard to the assignment to pInstance. If they did, pInstance would be assigned before the data it pointed to had been initialized, leading again to the possibility that a different thread would read an uninitialized x. An appealing treatment for this disease would be to volatile-qualify *pInstance as well as pInstance itself, yielding a glorified version of Singleton where all pawns are painted volatile; see Example 9. At this point, you might reasonably wonder why Lock isn’t also declared volatile. After all, it’s critical that the lock be initialized before you try to write to pInstance or temp. Well, Lock comes from a threading library, so you can assume it either dictates enough restrictions in its specification or embeds enough magic in its implementation to work without needing volatile. This is the case with all threading libraries that we know of. In essence, use of entities (objects, functions, and the like) from threading libraries leads to the imposition of “hard sequence points” in a program — sequence points that apply to all threads. For purposes of this article, we assume that such hard sequence points act as firm barriers to instruction reordering during code optimization: Instructions corresponding to source statements preceding use of the library entity in the source code may not be moved after the instructions corresponding to use of the entity, and instructions corresponding to source statements following use of such entities in the source code may not be 57
moved before the instructions corresponding to their use. Real threading libraries impose less draconian restrictions, but the details are not important for purposes of our discussion here. You might hope that the aforementioned fully volatile-qualified code would be guaranteed by the Standard to work correctly in a multithreaded environment, but it may fail for two reasons. First, the Standard’s constraints on observable behavior are only for an abstract machine defined by the Standard, and that
abstract machine has no notion of multiple threads of execution. As a result, though the Standard prevents compilers from reordering reads and writes to volatile data within a thread, it imposes no constraints at all on such reorderings across threads. At least that's how most compiler implementers interpret things. In practice, then, many compilers may generate thread-unsafe code from the aforementioned source.
class Singleton {
public:
  static Singleton* instance();
  ...
private:
  static Singleton* volatile pInstance;  // volatile added
  int x;
  Singleton() : x(5) {}
};

// from the implementation file
Singleton* Singleton::pInstance = 0;

Singleton* Singleton::instance() {
  if (pInstance == 0) {
    Lock lock;
    if (pInstance == 0) {
      Singleton* volatile temp = new Singleton;  // volatile added
      pInstance = temp;
    }
  }
  return pInstance;
}

Example 7: Declaring pInstance.

if (pInstance == 0) {
  Lock lock;
  if (pInstance == 0) {
    Singleton* volatile temp =
      static_cast<Singleton*>(operator new(sizeof(Singleton)));
    temp->x = 5;  // inlined Singleton constructor
    pInstance = temp;
  }
}

Example 8: Inlining the constructor in Example 7.

class Singleton {
public:
  static volatile Singleton* volatile instance();
  ...
private:
  // one more volatile added
  static volatile Singleton* volatile pInstance;
};

// from the implementation file
volatile Singleton* volatile Singleton::pInstance = 0;

volatile Singleton* volatile Singleton::instance() {
  if (pInstance == 0) {
    Lock lock;
    if (pInstance == 0) {
      // one more volatile added
      volatile Singleton* volatile temp = new volatile Singleton;
      pInstance = temp;
    }
  }
  return pInstance;
}

Example 9: A glorified version of Singleton.
If your multithreaded code works properly with volatile and doesn't work without it, then either your C++ implementation carefully implemented volatile to work with threads (less likely) or you simply got lucky (more likely). In either case, your code is not portable.
Second, just as const-qualified objects don't become const until their constructors have run to completion, volatile-qualified objects become volatile only upon exit from their constructors. In the statement:
the object being created doesn’t become volatile until the expression: new volatile Singleton;
has run to completion, and that means that we’re back in a situation where instructions for memory allocation and object initialization may be arbitrarily reordered. This problem is one we can address, albeit awkwardly. Within the Singleton constructor, we use casts to temporarily add “volatileness” to each data member of the Singleton object as it is initialized, thus preventing relative movement of the instructions performing the initializations. Example 10 is the Singleton constructor written in this way. (To simplify the presentation, we’ve used an assignment to give Singleton::x its first value instead of a member initialization list, as in Example 10. This change has no effect on any of the issues we’re addressing here.) After inlining this function in the version of Singleton where pInstance is properly volatile qualified, we get Example 11. Now the assignment to x must precede the assignment to pInstance, because both are volatile. Unfortunately, all this does nothing to address the first problem — C++’s abstract machine is single threaded, and C++ compilers may choose to generate threadunsafe code from source like that just mentioned, anyway. Otherwise, lost optimization opportunities lead to too big an efficiency hit. After all this, we’re back to square one. But wait, there’s more — more processors. DCLP on Multiprocessor Machines Suppose you’re on a machine with multiple processors, each of which has its own memory cache, but all of which share a common memory space. Such an architecture needs to define exactly how and when writes performed by one processor propagate to the shared memory and thus become visible to other processors. It is easy to imagine situations where one processor has updated the value of a shared variable in its own cache, but the updated value has not yet been flushed to main memory, much http://www.ddj.com
less loaded into the other processors' caches. Such inter-cache inconsistencies in the value of a shared variable are known as the "cache coherency problem." Suppose processor A modifies the memory for shared variable x and then later modifies the memory for shared variable y. These new values must be flushed to the main memory so that other processors see them. However, it can be more efficient to flush new cache values in increasing address order, so if y's address precedes x's, it is possible that y's new value will be written to main memory before x's is. If that happens, other processors may see y's value change before x's.
Such a possibility is a serious problem for DCLP. Correct Singleton initialization requires that the Singleton be initialized (step 1) and that pInstance be updated to be nonnull (step 2), and that these operations be seen to occur in this order. If a thread on processor A performs step 1 and then step 2, but a thread on processor B sees step 2 as having been performed before step 1, the thread on processor B may again refer to an uninitialized Singleton.
The general solution to cache coherency problems is to use memory barriers: instructions recognized by compilers, linkers, and other optimizing entities that constrain the kinds of reorderings that may be performed on reads and writes of shared memory in multiprocessor systems. In the case of DCLP, we need to use memory barriers to ensure that pInstance isn't seen to be nonnull until writes to the Singleton have been completed. Example 12 is pseudocode that closely follows an example presented by David Bacon et al. (see "The Double-Checked Locking Pattern is Broken" declaration). We
show only placeholders for the statements that insert memory barriers because the actual code is platform specific (typically in assembler). This is overkill, as Arch Robison points out (in personal communication):
Technically, you don't need full bidirectional barriers. The first barrier must prevent downwards migration of Singleton's construction (by another thread); the second barrier must prevent upwards migration of pInstance's initialization. These are called "acquire" and "release" operations, and may yield better performance than full barriers on hardware (such as Itanium) that makes the distinction.
Still, this is an approach to implementing DCLP that should be reliable, provided you're running on a machine that supports memory barriers. All machines that can reorder writes to shared memory support memory barriers in one form or another. Interestingly, this same approach works just as well in a uniprocessor setting. This is because memory barriers also act as hard sequence points that prevent the kinds of instruction reorderings that can be so troublesome.

Conclusion
There are several lessons to be learned here. First, remember that timeslice-based parallelism on uniprocessors is not the same as true parallelism across multiple processors. That's why a thread-safe solution for a particular compiler on a uniprocessor architecture may not be thread safe on a multiprocessor architecture, not even if you stick with the same compiler. (This is a general observation — it's not specific to DCLP.)
Singleton() {
  static_cast<volatile int&>(x) = 5;  // note cast to volatile
}

Example 10: Using casts to create the Singleton constructor.

class Singleton {
public:
  static Singleton* instance();
  ...
private:
  static Singleton* volatile pInstance;
  int x;
  ...
};

Singleton* Singleton::instance() {
  if (pInstance == 0) {
    Lock lock;
    if (pInstance == 0) {
      Singleton* volatile temp =
        static_cast<Singleton*>(operator new(sizeof(Singleton)));
      static_cast<volatile int&>(temp->x) = 5;
      pInstance = temp;
    }
  }
  return pInstance;
}
Example 11: Inlining a function in Singleton.
Second, although DCLP isn't intrinsically tied to Singleton, the use of Singleton tends to lead to a desire to "optimize" thread-safe access via DCLP. You should therefore be sure to avoid implementing Singleton with DCLP.
(a)

Singleton::instance()->transmogrify();
Singleton::instance()->metamorphose();
Singleton::instance()->transmute();

(b)

Singleton* const instance = Singleton::instance();  // cache instance pointer
instance->transmogrify();
instance->metamorphose();
instance->transmute();

Example 13: Instead of writing code like (a), clients should use something like (b).
If you (or your clients) are concerned about the cost of locking a synchronization object every time instance is called, you can advise clients to minimize such calls by caching the pointer that instance returns. For example, suggest that instead of writing code like Example 13(a), clients do things like Example 13(b). Before making such a recommendation, it's generally a good idea to verify that this really leads to a significant performance gain. Use a lock from a threading library to ensure thread-safe Singleton initialization, then do timing studies to see if the cost is truly something worth worrying about.
Third, avoid using a lazily initialized Singleton unless you really need it. The classic Singleton implementation is based on not initializing a resource until that resource is requested. An alternative is to use eager initialization; that is, to initialize a resource at the beginning of the program run. Because multithreaded programs typically start running as a single thread, this approach can push some object initializations into the single-threaded startup portion of the code, thus eliminating the need to worry about threading during the initialization. In many cases, initializing a Singleton resource during single-threaded program startup (that is, prior to executing main) is the simplest way to offer fast, thread-safe Singleton access. A different way to employ eager initialization is to replace the Singleton Pattern with the Monostate Pattern (see Steve Ball et al., "Monostate Classes: The Power of One"). Monostate, however, has different problems, especially when it comes to controlling the order of initialization of the nonlocal static objects that make up its state. Effective C++ (see "References") describes these problems and, ironically, suggests using a variant of Singleton to escape them. (The variant is not guaranteed to be thread safe; see Pattern Hatching: Design Patterns Applied by John Vlissides.)
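For reference, one conventional way to arrange the pre-main initialization mentioned above is sketched below. This is not code from the article, and the usual caveat applies: the order of initialization of nonlocal static objects across translation units is unspecified, so other static initializers must not depend on the Singleton.

class Singleton {
public:
  static Singleton* instance();
private:
  Singleton() {}
  static Singleton* pInstance;
};

Singleton* Singleton::pInstance = 0;

Singleton* Singleton::instance() {
  if (pInstance == 0) pInstance = new Singleton;  // runs before main; no lock needed
  return pInstance;
}

namespace {
  // Constructing this object during static initialization forces the
  // Singleton into existence while the program is still single threaded.
  struct ForceEarlyInit {
    ForceEarlyInit() { Singleton::instance(); }
  } forceEarlyInit;
}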
Singleton* Singleton::instance() {
  Singleton* tmp = pInstance;
  ...                        // insert memory barrier
  if (tmp == 0) {
    Lock lock;
    tmp = pInstance;
    if (tmp == 0) {
      tmp = new Singleton;
      ...                    // insert memory barrier
      pInstance = tmp;
    }
  }
  return tmp;
}

Example 12: Pseudocode that follows an example presented by David Bacon.
Another possibility is to replace a global Singleton with one Singleton per thread, then use thread-local storage for Singleton data. This allows for lazy initialization without worrying about threading issues, but it also means that there may be more than one “Singleton” in a multithreaded program. Finally, DCLP and its problems in C++ and C exemplify the inherent difficulty in writing thread-safe code in a language with no notion of threading (or any other form of concurrency). Multithreading considerations are pervasive because they affect the very core of code generation. As Peter Buhr pointed out in “Are Safe Concurrency Libraries Possible?” (see
"References"), the desire to keep multithreading out of the language and tucked away in libraries is a chimera. Do that, and either the libraries will end up putting constraints on the way compilers generate code (as Pthreads already does), or compilers and other code-generation tools will be prohibited from performing useful optimizations, even on single-threaded code. You can pick only two of the troika formed by multithreading, a thread-unaware language, and optimized code generation. Java and the .NET CLI, for example, address the tension by introducing thread awareness into the lan-
volatile: A Brief History
To find the roots of volatile, let's go back to the 1970s, when Gordon Bell (of PDP-11 fame) introduced the concept of memory-mapped I/O (MMIO). Before that, processors allocated pins and defined special instructions for performing port I/O. The idea behind MMIO is to use the same pins and instructions for both memory and port access. Hardware outside the processor intercepts specific memory addresses and transforms them into I/O requests, so dealing with ports became simply reading from and writing to machine-specific memory addresses.
What a great idea. Reducing pin count is good — pins slow down signal, increase defect rate, and complicate packaging. Also, MMIO doesn't require special instructions for ports. Programs just use the memory, and the hardware takes care of the rest. Or almost. To see why MMIO needs volatile variables, consider the following code:

unsigned int *p = GetMagicAddress();
unsigned int a, b;
a = *p;
b = *p;
If p refers to a port, a and b should receive two consecutive words read from that port. However, if p points to a bona fide memory location, then a and b load the same location twice and, hence, will compare equal. Compilers exploit this assumption in the copy propagation optimization that transforms b = *p; into the more efficient b = a;. Similarly, for the same p, a, and b, consider:

*p = a;
*p = b;

The code writes two words to *p, but the optimizer might assume that *p is
memory and perform the dead assignment elimination optimization by eliminating the first assignment. So, when dealing with ports, some optimizations must be suspended. volatile exists for specifying special treatment for ports, specifically: The content of a volatile variable is unstable (it can change by means unknown to the compiler); all writes to volatile data are observable, so they must be executed religiously; and all operations on volatile data are executed in the sequence in which they appear in the source code. The first two rules ensure proper reading and writing. The last one allows implementation of I/O protocols that mix input and output. This is informally what C and C++'s volatile guarantees.
Java took volatile a step further by guaranteeing the aforementioned properties across multiple threads. This was an important step, but it wasn't enough to make volatile usable for thread synchronization: The relative ordering of volatile and nonvolatile operations remained unspecified. This omission forces many variables to be volatile to ensure proper ordering. Java 1.5's volatile has the more restrictive, but simpler, acquire/release semantics: Any read of a volatile is guaranteed to occur prior to any memory reference (volatile or not) in the statements that follow, and any write to a volatile is guaranteed to occur after all memory references in the statements preceding it. .NET defines volatile to incorporate multithreaded semantics as well, which are similar to the currently proposed Java semantics. We know of no similar work being done on C's or C++'s volatile.
—S.M. and A.A.
guage and language infrastructure, respectively (see Doug Lea's Concurrent Programming in Java and Arch D. Robison's "Memory Consistency & .NET").

Acknowledgments
Drafts of this article were read by Doug Lea, Kevlin Henney, Doug Schmidt, Chuck Allison, Petru Marginean, Hendrik Schober, David Brownell, Arch Robison, Bruce Leasure, and James Kanze. Their comments, insights, and explanations greatly improved the article and led us to our current understanding of DCLP, multithreading, instruction ordering, and compiler optimizations.

References
David Bacon, Joshua Bloch, Jeff Bogda, Cliff Click, Paul Hahr, Doug Lea, Tom May, Jan-Willem Maessen, John D. Mitchell, Kelvin Nilsen, Bill Pugh, and Emin Gun Sirer. The "Double-Checked Locking Pattern is Broken" Declaration. http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html.
Steve Ball and John Crawford. "Monostate Classes: The Power of One." C++ Report, May 1997. Reprinted in More C++ Gems, Robert C. Martin, ed., Cambridge University Press, 2000.
Peter A. Buhr. "Are Safe Concurrency Libraries Possible?" Communications of the ACM, 38(2):117-120, 1995. http://citeseer.nj.nec.com/buhr95are.html.
Doug Lea. Concurrent Programming in Java. Addison-Wesley, 1999. Excerpts relevant to this article can be found at http://gee.cs.oswego.edu/dl/cpj/jmm.html.
Scott Meyers. Effective C++, Second Edition. Addison-Wesley, 1998. Item 47 discusses the initialization problems that can arise when using nonlocal static objects in C++.
Arch D. Robison. "Memory Consistency & .NET." Dr. Dobb's Journal, April 2003.
Douglas C. Schmidt and Tim Harrison. "Double-Checked Locking." In Pattern Languages of Program Design 3, Robert Martin, Dirk Riehle, and Frank Buschmann, editors. Addison-Wesley, 1998. http://www.cs.wustl.edu/~schmidt/PDF/DC-Locking.pdf.
Douglas C. Schmidt, Michael Stal, Hans Rohnert, and Frank Buschmann. Pattern-Oriented Software Architecture, Volume 2. Wiley, 2000. Tutorial notes based on the patterns in this book are available at http://cs.wustl.edu/~schmidt/posa2.ppt.
ISO/IEC 14882:1998(E) International Standard. Programming languages — C++. ISO/IEC, 1998.
John Vlissides. Pattern Hatching: Design Patterns Applied. Addison-Wesley, 1998. The discussion of the "Meyers Singleton" is on page 69.
DDJ
When Format Strings Attack!
Combating vulnerabilities
Herbert H. Thompson and James A. Whittaker
In March 2004, a bug was reported in Epic Games's Unreal game engine, the machine that drives such popular games as Unreal Tournament and Splinter Cell. It turns out that users could crash the server by inserting a %n character into some of the incoming packets. A couple of months earlier, a similar problem was found in the Windows FTP Server produced by FTP Server Software. It seems that with a particular character string (one that contained some strategically placed %n and %s characters), you could execute arbitrary instructions on a remote host. Roll back the clock to 2000 and you find an issue reported in Wu-Ftpd that let casual users read sensitive information from a running application by entering %x into an input string.
These three applications have one thing in common — they all had format-string vulnerabilities. Format-string vulnerabilities happen when programmers fail to specify how user data will be formatted. Any C programmer who has typed a few semicolons is familiar with the types of functions that let this kind of thing happen. The culprits are usually members of the format-string family in C and C++, which includes the printf, sprintf, snprintf, and fprintf functions.
When most of us learned C, the first thing we did was to build a "Hello World" program that used a printf:

printf("Hello World");
Herbert is Director of Security Technology at Security Innovation and James is a professor of computer science at the Florida Institute of Technology. They are also the coauthors of How To Break Software Security (Addison-Wesley, 2003). They can be contacted at [email protected] and [email protected], respectively.
We then graduated to more ambitious programs, passing a name in as a command-line argument and then printing it:

//Printf_1
#include <stdio.h>
int main(int argc, char *argv[])
{
    printf("%s", argv[1]);
    return 0;
}
In this example, the string in quotes is a format string, and the format specifier %s tells the function to read the next argument (in this case argv[1], the first command-line argument) and print it as a string. The danger with format functions is that input is often printed without a fixed format string. For instance, in the aforementioned code, you could omit the %s format specifier, which would change the printf statement to:

//Printf_2
#include <stdio.h>
int main(int argc, char *argv[])
{
    printf(argv[1]);
    return 0;
}
Now, printf blindly processes data supplied by users. Using this structure, our application is open to attack through parameters entered by users. Consider the input string a_string. Compiling the aforementioned programs — Printf_1.exe using the %s specifier and Printf_2.exe without %s — and running them with our string yields:

>Printf_1 a_string
a_string
>Printf_2 a_string
a_string
Both applications produce the same result. If you enter the string a_string%s, however, you get this output:

>Printf_1 a_string%s
a_string%s
>Printf_2 a_string%s
a_string+ ?
The difference is that in Printf_1, you explicitly told the application to treat a_string%s as a string and, thus, it was printed as entered. In the second case, the application used the input a_string%s as the format string and, thus, it was interpreted as the string a_string followed by the format specifier %s. When compiled, pointers to the parameters to be formatted by the printf function are placed on the stack. When Printf_2 was executed, there was not a valid address to a string on the stack, so the %s format specifier printed (as text) whatever string occupied the memory address that happened to be at the top of the stack. Additionally, it is fairly easy to crash this application and cause a denial of service by using multiple %s specifiers, which eventually read from protected memory space or an invalid address on the stack.

Besides %s, other formatting characters exist that let attackers launch much more insidious attacks. A favorite of attackers is %x, which can be used to print a hex value from the top of the stack. Using multiple %x specifiers, you can look at the entire contents of the stack. This is a relatively simple attack to carry out, and the result can be the exposure of sensitive data in memory, including passwords, encryption keys, and other secrets. Figure 1 illustrates how the attack works: Users are prompted for some input and then the application prints that input in a future command. Users can read data from the stack by using multiple %x characters.

Aside from %x and %s, %n is one of the most interesting specifiers because it actually writes something to memory. Many format-string attacks use the %x and %n format specifiers in combination. If you use %n without passing a variable, the application attempts to write a value — the number of bytes formatted by the format function — to the memory address stored at the top of the stack. It is this ability that may ultimately let attackers execute arbitrary commands by taking control of the application's execution path.
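To make %n's write-back behavior concrete, here is a small, benign illustration of our own (it is not code from any of the vulnerable applications mentioned above):

#include <stdio.h>

int main(void)
{
    int count = 0;
    /* %n stores the number of characters printed so far into the int
       that the corresponding argument points to. */
    printf("abcd%n", &count);
    printf("\ncount = %d\n", count);   /* prints: count = 4 */
    return 0;
}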
There are many other format specifiers you can use. Table 1 presents a list of some of the more commonly used ones.

Specifier   Description
%d          Signed decimal string.
%u          Unsigned decimal string.
%i          Signed decimal string.
%x          Unsigned hexadecimal string.
%c          Character format specifier.
%s          String format specifier.
%f          Signed decimal string of the form xx.yyy.
%p          Formats a pointer to an address.
%n          Number of bytes written so far.
%%          Just inserts %.
Table 1: Common specifiers for format functions.

Figure 1: An application with a simple format-string vulnerability can be manipulated to expose data on the stack by using %08x.

The Format Functions
The printf function is a member of a wider class of functions that use format strings for output. Functions like sprintf and fprintf are also vulnerable to format-string attacks. Table 2 lists some other common C functions that use format strings and are vulnerable to this type of attack. In addition to functions that directly format data, however, there are a few others, such as syslog, that can also process user data and have been exploited through format specifiers. Of the functions in Table 2, sprintf is particularly interesting from a security standpoint because it "prints" formatted data to a buffer. Aside from the possibility of a format-string vulnerability, using this particular function can lead to buffer-overflow vulnerabilities, and it should usually be replaced with its length-checking cousin snprintf.

While people have been publicly exploiting buffer overruns since the late 1980s, format-string attacks have only been well understood since 2000. That year, the Common Vulnerabilities and Exposures database (CVE; http://cve.mitre.org/) listed over 20 major applications and platforms that had been exploited through these attacks. Let's take a look at a specific instance of this vulnerability in a commercial application. The Windows FTP Server available from FTP Server Software (http://srv.nease.net/) is open to format-string attacks through the username parameter (see http://www.securityfocus.com/archive/1/349255/). For example, if you enter %s as the "User," the server crashes when it tries to interpret the value at the top of the stack as a memory address (because it attempts to read from this bogus address, as in Figure 2).

Figure 2: Windows FTP Server crashes when we enter "%s" as the "User."

The good thing about format-string vulnerabilities is that they are relatively easy to find in a source-code audit. Any variable that contains data that is directly or indirectly influenced by the user should be processed using a format string that dictates how that data will be interpreted. A careful analysis of code can usually find such vulnerabilities. It is important, though, to be familiar with functions that use formatted output.
Function    Purpose
fprintf     Prints a formatted string to a file.
printf      Prints a formatted string to stdout.
sprintf     Prints a formatted string to a string buffer.
snprintf    Prints a formatted string to a string buffer; the programmer can specify the length of data to be written to the destination buffer.
Table 2: Common formatting functions in C vulnerable to format-string attacks.

Table 2 is a good starting point, but there are also some OS-specific functions, like syslog(), that must be scrutinized. There are also some automated source-scanning tools for C that can make the process of searching through your source code easier. RATS, the Rough Auditing Tool for Security, is a free source-code scanner produced by Secure Software (http://www.securesw.com/) that is capable of scanning C, C++, Perl, PHP, and Python source code. The ITS4 security scanner by Cigital (http://www.cigital.com/) is also free and can be used to scan C and C++ for related issues. Flawfinder (http://www.dwheeler.com/flawfinder/) is another GPL vulnerability finder that scans C and C++ code for a variety of security problems, including format strings. If your focus is just finding format-string vulnerabilities, pscan (http://www.striker.ottawa.on.ca/~aland/pscan/) is an open-source tool that focuses exclusively on format-string vulnerabilities in C code.

From a black-box-testing perspective, these vulnerabilities can be unearthed by including specifiers such as %x, %s, and %n in input fields. The symptom of failure when using a string of %x specifiers is likely to be "garbage" data returned to users in messages that quote the input string. A more drastic approach is to place several %s specifiers into the input field. If a format-string vulnerability exists, this causes the application to read from successive addresses at the top of the stack. Since some of the data on the stack may be the contents of other variables (like a string), trying to convert this data to a memory address and then reading from that address is likely to result in an "Access Violation" error, which will cause the application to crash.

Once they have been located, fixing format-string vulnerabilities is relatively easy: Use a fixed format string!
For example, vulnerable calls are likely to look something like this:

printf(user_data);
fprintf(stdout, user_data);
snprintf(dest_buffer, size, user_data);

If you want user data to be displayed, processed, or saved as a string by the above functions, the calls can be fixed by supplying %s as the format string:

printf("%s", user_data);
fprintf(stdout, "%s", user_data);
snprintf(dest_buffer, size, "%s", user_data);

With these modifications, we have told the format functions to treat user input strictly as a string. By explicitly specifying the format of user data, we protect the application against manipulation through format characters.
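Compilers can also help enforce the fixed-format-string rule. GCC-compatible compilers, for instance, warn about non-literal format strings when given the -Wformat and -Wformat-security options, and the format function attribute extends the same checking to wrappers you write yourself. The following is a minimal sketch; the wrapper name log_msg is our own example, not something from the article's code:

#include <stdarg.h>
#include <stdio.h>

/* Tell the compiler that log_msg() takes a printf-style format string as
   argument 1, with the formatted values starting at argument 2, so that a
   call such as log_msg(user_data) is flagged at compile time. */
void log_msg(const char *fmt, ...) __attribute__((format(printf, 1, 2)));

void log_msg(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
}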
Beyond Format Strings
In the previous installment of this series (see "String-Based Attacks Demystified," DDJ, June 2004), we took a look at some of the ways attackers manipulate input strings to take control of software. Format-string vulnerabilities represent an important category of string vulnerabilities. The tie that binds all these problems together is an implicit trust of user data and, thus, a failure to validate that data. The solution — validating user input, of course! Whenever your application reads user data, think about the following: How will this data be used? What are the escape characters, strings, commands, and reserved words that may be interpreted as more than just text? We may not be able to build an impenetrable fortress, but we can at least lock the castle gate.
DDJ
PROGRAMMER’S TOOLCHEST
The Subversion Version-Control Program An up-to-date version-control package Jeff Machols
When it comes to version control in the open-source arena, there's no question that CVS rules. However, this reign may be ending. The Subversion version-control program provides all the benefits and features of CVS, along with many improvements. Since its 1.0 release in early 2004, Subversion (http://subversion.tigris.org/) has already been adopted by projects such as the Apache Software Foundation Directory Project (http://incubator.apache.org/directory/).

Because Subversion contains all the features of CVS, its creators designed the interface to match CVS as much as possible. For instance, the command-line interface is named "svn" and the basic operations have the same names; "cvs up" becomes "svn up," thereby easing migration to Subversion. Command-line binaries are available for most operating systems. Subversion also uses the copy-modify-merge paradigm, so existing development processes built around CVS do not have to be changed.

Although the Subversion toolset is small, there are a few key tools. Windows users who want a GUI can get TortoiseSVN (http://tortoisesvn.tigris.org/), a plug-in that embeds SVN commands in Explorer's right-click menu. Likewise, for the Eclipse IDE, there is a plug-in called Subclipse (http://subclipse.tigris.org/).

Jeff is the UNIX systems manager for CitiStreet and cofounder of the Apache Directory Project. Jeff can be contacted at [email protected].
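Day to day, the svn command line really does mirror CVS. A quick sketch of a typical session follows; the repository URL is hypothetical:

svn checkout http://svn.example.com/repos/project/trunk project
cd project
# ...edit files...
svn status
svn up
svn commit -m "Describe the change"

(svn also accepts the CVS-style abbreviations, so "svn co" and "svn ci" work as expected.)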
Architecture
I'm the first to say that CVS is a great application — its longevity alone is a testament to this. However, it is starting to show its age. Consequently, the Subversion creators realized that to fix some of CVS's deficiencies, a rewrite was required. The first key point in Subversion's architecture is the use of the Apache Portable Runtime (APR) libraries, a set of libraries that abstract away OS-specific calls in the C code, providing a Java-like "write once, run anywhere" model. This eliminates the need for lots of excruciating compiler directives in the code, streamlining development and source-code maintenance.

Subversion's design is based on three distinct components — the client, network, and filesystem. Each component has an interface that lets you focus on one area of the application. You can write a new client-side feature without having to deal with the guts of the filesystem implementation. The interfaces also provide a means for applications to be easily built on top of Subversion, such as GUIs and plug-ins. This modularity allows different components to be snapped in without disrupting the rest of the application. And while there are other design and architectural improvements in Subversion, the most beneficial is the overall open-source community orientation. Creating an environment that lets more contributors easily get involved expedites the development of the project into a robust and stable production application.

Atomic Commits
One of Subversion's most powerful enhancements is the concept of atomic commits. Logically, CVS treats a commit as a set of individual check-ins, one for each file changed. Each file has its own incremental revision number and log message. Subversion, on the other hand, treats each commit as one change to the repository and has a global revision number. This is the part of Subversion that takes some getting used to: The revision number on a file or entry in the repository is on a per-commit basis, not per file. The global revision number starts at 0 and is incremented as a whole number for each commit to the repository. Each entry changed in that commit has its new revision number set to the global revision number. In Subversion, revision numbers in a file are not necessarily sequential (1.1, 1.2, 1.3); your revision numbers will look more like r1, r10, r25.

Since a commit is treated as a new snapshot of the repository, it captures adds, moves, and copies of objects, as well as changes to an object. This is a powerful mechanism that lets you retrieve the state of the repository at any given time with ease. For instance, consider this sequence of events:
1. A new repository is created. Two files, foo.c and bar.c, are created and committed.
2. You make a change to foo.c and commit.
3. Someone else makes a change to both files and commits.
Now look at the logs for the two files in Figure 1. Since foo.c was changed each time a commit was done on the repository, it has a log entry for each revision. The file bar.c was not changed during the commit of revision 2, so it does not contain a log entry for r2. Revision 2 of the repository contains a version of bar.c that's identical to r1. Since the identical copies are links inside the repository, this does not use extra space. In terms of storage, in fact, Subversion is actually cheaper than CVS because the revision numbers and log messages are stored once globally, instead of once per file. This also makes modifying a log message easier, because it only needs to be changed in one place.

$ svn log bar.c
------------------------------------------------------------------
r3 | alex | 2004-02-25 14:14:31 -0500 (Wed, 25 Feb 2004) | 2 lines
Added license header to all files
------------------------------------------------------------------
r1 | jeff | 2004-02-25 14:12:51 -0500 (Wed, 25 Feb 2004) | 2 lines
initial version
------------------------------------------------------------------

$ svn log foo.c
------------------------------------------------------------------
r3 | alex | 2004-02-25 14:14:31 -0500 (Wed, 25 Feb 2004) | 2 lines
Added license header to all files
------------------------------------------------------------------
r2 | jeff | 2004-02-25 14:13:16 -0500 (Wed, 25 Feb 2004) | 2 lines
Added main function
------------------------------------------------------------------
r1 | jeff | 2004-02-25 14:12:51 -0500 (Wed, 25 Feb 2004) | 2 lines
initial version
------------------------------------------------------------------

Figure 1: Sample log.
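Because every commit produces one new repository-wide revision, pulling back any earlier snapshot takes a single command. The commands below are a sketch; the revision numbers and URL are illustrative:

svn log -v -r 3 bar.c        # show what the r3 commit touched
svn update -r 9              # roll the working copy back to revision 9
svn checkout -r 2 http://svn.example.com/repos/project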
Directories and Metadata
A longtime complaint with CVS is its lack of functionality around versioning directories. This deficiency makes it difficult to move or rename files and directories. The only way to move a file in CVS is to remove it and add it in the new location, which causes problems with accessing history, tags, and previous releases. Subversion addresses these problems; in fact, according to some of the documentation, this was one of the reasons for writing it. There are commands — copy, delete, mkdir, and move; or, for the UNIX-oriented, cp, rm, mkdir, and mv — that allow file and directory manipulation from the working directory; on a commit, the change is reflected in the repository. These actions preserve the history of the object, including the hierarchy from previous revisions. For example, if you move foo.c to file.c and commit revision 10, a checkout of revision 9 contains foo.c in its original location. This is helpful when restoring a previous version and trying to compile after the directory structure has changed.

Subversion also has a much more robust metadata implementation than CVS, called a "property" — basically a hash table that exists for each element in the repository and holds name/value pairs. The name and value can be any text. Properties are themselves versioned, which gives another level of traceability and history for your repository. A great use for properties is your software promotion process. For instance, you can create a property called "state," which has the value development, test, QA, or production. Automated scripts can easily be written to build code based on a particular state, and you can go back to a specific revision and see what the property was set to. Along with custom properties, Subversion has a set of built-in properties for common tasks. These include setting up your ignore list, setting the system execute bit on a file, the MIME type, keyword substitution, and EOL style.
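Properties are set and read from the command line. Here is a sketch using the hypothetical "state" property described above; the file names are ours:

svn propset state QA src/foo.c             # attach or change the custom property
svn propget state src/foo.c                # prints: QA
svn propset svn:executable ON build.sh     # one of the built-in properties
svn commit -m "Promote foo.c to QA"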
Branches and Tags
Compared to CVS and most other version-control systems, branching and tagging are implemented differently in Subversion. In Subversion, a branch is simply a copy of the files in a different subdirectory. Just like CVS, you use merge to sync your changes with the HEAD. While you can put the branch subdirectory anywhere in your working copy, a standard layout has been adopted by most Subversion users. Before adopting it, you have to decide whether the repository will contain multiple projects or whether there will be a 1:1 ratio. For now, assume you have one repository that contains three projects: GUI, Backend, and Core. Each project directory at the top level should contain the subdirectories trunk, branches, and tags. The trunk should be your main development path, and the branches and tags directories should contain the project's branched and tagged paths. Your layout will look something like Figure 2.

/
    GUI/
        trunk/
        branches/
        tags/
    Backend/
        trunk/
        branches/
        tags/
    Core/
        trunk/
        branches/
        tags/

Figure 2: Branch and tag directory layout.

There are no rules as to where a project root needs to be; in fact, project roots do not all need to be at the same level in the tree. They should be located wherever it makes sense for the layout of the project. Also be aware that the revision number is global to the repository, not to an individual project root.
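Under this layout, creating a branch or a tag is just a cheap server-side copy. For example (the URLs are hypothetical):

svn copy http://svn.example.com/repos/GUI/trunk \
         http://svn.example.com/repos/GUI/branches/1.0-fixes \
         -m "Create maintenance branch for 1.0"
svn copy http://svn.example.com/repos/GUI/trunk \
         http://svn.example.com/repos/GUI/tags/1.0 \
         -m "Tag the 1.0 release"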
Protocol and Backend
There are two options for the network protocol that Subversion can use. The first is a simple, lightweight custom protocol provided by a server-side daemon called "svnserve" (which can also be tunneled over SSH). The other approach is to use the Apache HTTP Server. Subversion has an Apache module called "mod_dav_svn," which uses the WebDAV and DeltaV protocols over HTTP for the client/server communication. There are several advantages to using this protocol, the first being stability and acceptance. You don't have to worry about convincing your network admin to open up a new port, or suffer through a long debug process for a custom protocol.
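Serving a repository through Apache then comes down to a short httpd.conf fragment once mod_dav_svn is installed. A minimal sketch follows; the module path, location name, and repository path are all hypothetical:

LoadModule dav_svn_module modules/mod_dav_svn.so
<Location /repos>
    DAV svn
    SVNPath /var/svn/repos
</Location>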
Subversion is also able to take advantage of existing features built into this protocol, such as authentication through the HTTP server and version browsing with WebDAV.

Instead of writing a proprietary backend, Subversion uses Berkeley DB from Sleepycat Software (http://www.sleepycat.com/) for its data store. Once again, the underlying principle of using existing technology is a key factor; there was no real benefit to going through the trouble of writing a custom backend. Berkeley DB provides features such as hot backups, replication, journals for recovery, and efficient memory usage.

Migrating From CVS
Once you are ready to move from CVS, there are several decisions to be made. The first is whether to keep revision history and log messages. If it seems like an opportune time to baseline your code, maybe you won't need the history; in that case, a simple copy into your Subversion working directory and a commit loads the repository. In the more likely case that you decide to keep your history, the next choice is whether to bring branches and tags from CVS into the Subversion repository. Unfortunately, due to constraints in the conversion mechanism, this is all or nothing: Either you get all the tags and branches or just the trunk.

Subversion has a tool to import an existing CVS repository, including history, branches, and tags. This process can get a little dicey depending on your CVS repository. Basically, the more straightforward your CVS implementation, the higher the chance of success in the conversion process. If you have lots of branches and have moved or deleted entries from the CVS repository, you may be in for a long night.

The conversion tool is a Python script called "cvs2svn.py." This script is packaged with Subversion but maintained separately. The utility lags behind the Subversion code in terms of production worthiness (which could be one of the reasons it is separate). The good news is that the tool is being improved at a quick pace. I recommend getting the latest version before trying to run the conversion. You can check out the script and documentation from CollabNet:

svn co http://svn.collab.net/repos/cvs2svn
Once you have the script, take a look at the README and run cvs2svn.py --help. It shouldn't take long to skim through this and be ready to give the conversion a try. The script has the ability to perform a dry run, making sure it can properly parse the CVS repository without actually doing the conversion.
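As a concrete illustration, a trunk-only conversion routed through a dump file might look something like the following. The paths and the dump-file name are hypothetical, and the exact option spellings should be checked against cvs2svn.py --help for the version you have:

cvs2svn.py --trunk-only --dump-only /home/cvsroot/myproject
svnadmin create /var/svn/test-repo
svnadmin load /var/svn/test-repo < cvs2svn-dump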
It is a good idea to try the dry run first and see whether you have any issues before going through the actual conversion. The success and speed of the conversion depend on many factors. If you hit a snag, here are some suggestions:
• If possible, run the conversion on the box your CVS repository is on. If you cannot do this, see if you can get a tarball of the repository (maybe from a backup) and create it on your local machine. If you have a big repository, it is worth the effort.
• Leave branches and tags behind. This is where I have run into most of my problems. If you are getting stuck on the branches, try running the conversion with the --trunk-only option. If you need the source from the branches, manually create them in Subversion. You lose the branch history, but not the HEAD history.
• Make sure you have plenty of disk space. Conversion creates lots of temporary files.
• Try the conversion first on a test Subversion repository. If you're adding to an existing Subversion repository, you do not want to get halfway through and have an error put your existing code in a bad state.
• Create a dump file first. The cvs2svn script can be run to create a Subversion dump file using the --dump-only option. This dump file can then be imported into a Subversion repository. This is especially useful if you run into problems on the Subversion side. Once you have the dump of the CVS repository, you can use this file instead of reading from CVS each time.

Conclusion
With the Subversion 1.0 production release and the increase in supporting tools, it won't be long before CVS steps aside and lets Subversion become the de facto standard open-source version-control software. In fact, the CVS web site has a link to Subversion's homepage. The question for you is when — not if — you migrate from CVS to Subversion.
DDJ
EMBEDDED SYSTEMS
Runtime Monitoring & Software Verification Automating a labor-intensive process Doron Drusinsky
Software verification has been a concern for more than 20 years. However, recent security flaws in mainstream products (Internet Explorer, for instance) and software failures (such as the Ariane 5 rocket failure) have brought renewed focus to the topic. Runtime monitoring is one lightweight formal software verification technique. The technique is also used for nonverification purposes, including temporal business-rule checking and temporal rules in expert systems. In this article, I examine the concept of runtime monitoring, focusing on its application to robust system verification.

Unlike conventional testing methods, which are human intensive, slow, expensive, and error prone, formal methods make automated computer-based verification possible. However, most formal methods (model checking and theorem proving, for instance) suffer from limited acceptance because of factors ranging from computational complexity to the high level of mathematical skill needed to use them effectively. Runtime monitoring, which is based on formal specifications, is similar to conventional testing in that it depends on actual system execution, either in test mode or in the field. The benefits of runtime monitoring over formal methods such as model checking are:
• Scalability. Runtime monitoring scales well in that it can be applied to large systems written in existing programming languages such as C, C++, and Java.
• Expressiveness. Runtime monitoring techniques can be used to check properties that include real-time constraints and time-series constraints.
Still, the main benefit of runtime monitoring over conventional testing is that it can be automated via executable specifications. In addition, runtime monitoring can be used beyond the testing phase to monitor the system and also to recover from runtime specification violations.

Doron is the founder of Time Rover and author of Temporal Rover and DBRover. He can be contacted at http://www.time-rover.com/.
Runtime Monitoring Using Executable Specifications
A formal specification is a description of what the system should and should not do, using a language that computers can understand and ultimately execute (see the accompanying text box entitled "Formal Specifications"). Runtime monitoring can be thought of as the process of executing formal specifications as a real computer program. Runtime monitoring primarily focuses on the execution and evaluation of a particularly difficult aspect
of formal specification — namely, ordering and temporal relationships between events, conditions, and other artifacts of computations. Of particular interest is online runtime monitoring, where monitoring is performed for very long and potentially never-ending computations.

Consider, for example, an infusion pump controller. The infusion pump involves six conditions: (infusion) begin, (infusion) end, keyPressed, valveOpen (where valveClosed = !valveOpen), pumpOn (where pumpOff = !pumpOn), and alarm. The pump operates (pumpOn is True) in intervals between a begin and an end, where the end coincides with the valve having been closed for more than 10 seconds. For every such interval, a human-induced keyPressed must be repeatedly sensed within two-minute intervals; otherwise, an alarm sounds within 10 seconds. Following an alarm, a subsequent keyPressed event terminates the alarm.

Listing One is Java code for an infusion pump controller within a simulation wrapper. In addition, Listing One includes three embedded temporal logic assertions written as source-code comments. They correspond to the following natural-language assertions:
• Assertion MUST_SHUT_PUMP. An end sensed after 10 or more continuous seconds of valveClosed must force the pump off.
• Assertion NEVER_SHUT_PUMP_TOO_SOON. Never shut the pump off unless the valve has been closed for at least 10 continuous seconds.
• Assertion NO_KEYPRESSED_FORCES_ALARM. While the pump is on, if keyPressed does not occur within two minutes, then the alarm should sound within 10 seconds afterwards.
The two primary techniques for monitoring these requirements during the execution of the infusion pump software are
in-process monitoring and remote monitoring. In-process monitoring resembles conventional programming-language assertions such as those in C, C++, or Java. Unlike conventional assertions, however, in-process runtime monitoring supports the verification of complex requirements that assert over time and order, as with the three infusion pump assertions. In fact, conventional assertions are comparable to temporal logic assertions in which the only allowable temporal operator is the Always operator. The infusion pump temporal assertions in Listing One are preprocessed for in-process monitoring using a commercial temporal logic code generator (in this case, Temporal Rover, developed by my company), which replaces the comment-based temporal assertions of Listing One with executable Java, C, C++, or Matlab code. It is this generated code that performs runtime, in-process monitoring for the infusion pump controller.

Listings Two and Three are excerpts of the infusion pump simulation report, including runtime-monitoring printouts (the complete report is available electronically; see "Resource Center," page 5). In this example, runtime monitoring uncovered the following bugs:
• The infusion pump controller, after two minutes of no key press, goes into an alarm-generation mode where it misses the detection of a shutdown (end) command. Uncovered by Test #1 and assertion MUST_SHUT_PUMP.
• The infusion pump controller, while counting 10 seconds of valveClosed, misses the event valveOpen and loses synchronization with the state of the valve. Consequently, it permits pump shutoff although the valve has not been closed for 10 continuous seconds. Uncovered by Test #2 and assertion NEVER_SHUT_PUMP_TOO_SOON.
With remote monitoring, special probing code (automatically generated by the runtime monitoring tool) is used instead of the temporal assertion comments. These probes detect changes to the basic Boolean events and conditions of interest, such as keyPressed, alarm, and valveOpen, and feed the runtime monitor with the information it needs to execute the formal specifications. With remote monitoring, the formal specification rules are executed remotely within the remote monitor. Advantages of remote monitoring over in-process monitoring include: Remote monitors have a lower impact on the real-time performance of the target system, they have almost no memory footprint on the target system, and they serve as a central repository of requirements and assertions.
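To see why conventional assertions fall short here, consider what checking NO_KEYPRESSED_FORCES_ALARM by hand involves: the check needs memory of past events and a notion of elapsed time. The class below is a minimal hand-rolled sketch that assumes one call to step() per second (the 1 Hz rate used in Listing One); it is only an illustration of the idea, not the code a temporal-logic code generator such as Temporal Rover emits:

// Hand-rolled monitor for: while the pump is on, if keyPressed does not occur
// within two minutes, the alarm must sound within 10 seconds afterwards.
class KeyPressAlarmMonitor {
    private int secondsSinceKeyPress = 0;  // seconds since last keyPressed while pump on
    private int secondsOverdue = -1;       // -1 means "no alarm currently required"

    // Call once per second; returns false the moment the requirement is violated.
    boolean step(boolean pumpOn, boolean keyPressed, boolean alarm) {
        if (!pumpOn || keyPressed) {        // the requirement only constrains pump-on periods
            secondsSinceKeyPress = 0;
            secondsOverdue = -1;
            return true;
        }
        secondsSinceKeyPress++;
        if (secondsSinceKeyPress >= 120 && secondsOverdue < 0) {
            secondsOverdue = 0;             // two minutes without a key press: alarm now due
        }
        if (secondsOverdue >= 0) {
            if (alarm) {                    // alarm sounded in time
                secondsOverdue = -1;
                secondsSinceKeyPress = 0;
            } else if (++secondsOverdue > 10) {
                return false;               // no alarm within 10 seconds: violation
            }
        }
        return true;
    }
}

Even this single, simplified check needs counters and state that persist across evaluations; a code generator produces the equivalent bookkeeping automatically for arbitrary temporal formulas.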
Runtime monitoring is applicable early in the design process using prototyping tools such as Matlab and modeling methodologies such as the UML. Figure 1, for example, is a UML statechart for the infusion pump with assertions embedded in states. It is possible then, using commercial statechart tools, to invoke a runtime monitoring code generator for these assertions during statechart code generation, thereby generating statechart code that is armor-plated with executable temporal assertions. Following early-phase prototyping, runtime monitoring is also applicable at the cross-platform simulation and testing level and eventually on an embedded target for purposes of monitoring in the field.
Figure 1: Statechart for the infusion pump control with embedded formal specification assertions. (The statechart includes the states Pump-Off, OPN, CLS, Wait-For-KeyPressed, Alarm Necessary, and Alarm, with assertions such as "Always (keyPressed => Always<=120 !alarm)", "alarm Until keyPressed", and "Eventually<=10 alarm" attached to states.)
Formal Specifications
Temporal logic is a prominent formal specification language for reactive systems. Linear-time Temporal Logic (LTL) is the variant of temporal logic most suitable for runtime monitoring. The syntax of LTL adds eight operators to the AND, OR, XOR, and NOT of propositional logic. Four of the operators deal with the future — Always in the Future, Sometime in the Future, Until, and Next Cycle. Four more address the past — Always in the Past, Sometime in the Past, Since, and Previous Cycle.

Metric Temporal Logic (MTL) enhances LTL's capability. With it, you can define upper and lower time constraints and time ranges for all LTL temporal operators. By imposing relative- and real-time constraints on LTL statements, MTL lets you use LTL to verify real-time systems. An example showing a relative-time upper bound in MTL is Always<10 (readySignal Implies Next ackSignal), which says: Always, within the next 10 cycles, readySignal equals 1 implies that one cycle later ackSignal equals 1. Here, readySignal and ackSignal are propositional logic expressions, and a cycle is an LTL time unit, which is a user-controlled quantity. For example, the infusion pump in Listing One was designed to operate and evaluate assertions at a rate of 1 Hz. The time constraint is relative in that it is counted in clock cycles. Another example is Always_timer1_[5,10] (readySignal Implies Eventually_timer2_>=20 ackSignal), which reads: Always, between 5 and 10 timer1 real-time units in the future, readySignal equals 1 implies that eventually, at least 20 timer2 real-time units further in the future, ackSignal equals 1. Here, two real-time constraints are specified by timer1 and timer2. A separate statement maps these timers to system calls, system clocks, or other counting devices.

Some code generators further extend LTL by adding counting operators (see "The Temporal Rover and the ATG Rover," by D. Drusinsky, Proceedings of the Spin2000 Workshop, Springer Verlag Lecture Notes in Computer Science). Another important extension is support for asserting about time-series properties, such as stability and monotonicity, as in an automotive cruise-control system required to sustain 98 percent speed stability while cruising.
— D.D.
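As a point of reference, the Until operator used throughout these formulas has the standard textbook semantics shown below for a state sequence $\sigma$; this is the usual LTL definition, not Temporal Rover's input syntax:

$\sigma, i \models p\ \mathbf{U}\ q \iff \exists\, j \ge i:\ \sigma, j \models q\ \wedge\ \forall\, k,\ i \le k < j:\ \sigma, k \models p$

Roughly speaking, the MTL variants constrain the distance $j - i$; the bounded form Until_>=10_, for example, additionally requires $j - i \ge 10$.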
The Jet Propulsion Laboratory, for instance, has applied runtime monitoring on a cross-platform simulator to verify flight code for the Deep Impact project (see "Applying Runtime Monitoring to the Deep-Impact Fault Protection Engine," by D. Drusinsky and G. Watney, 28th IEEE/NASA Software Engineering Workshop, 2003).

Considering the garbage-in/garbage-out effect that poor specifications (formal and informal) have on the quality of software, it is desirable to perform thorough simulations of formal specification requirements. Some commercial runtime monitors provide built-in temporal rule simulators. Figures 2(a) and 2(b) illustrate the simulation of the NO_KEYPRESSED_FORCES_ALARM assertion under two scenarios. Simulation is also necessary for the exchange of formal requirements within development teams, or for purposes of distributed research and development, where teams need to share both the formal requirements and all mutually acceptable simulation scenarios for those requirements.

Figure 2: Simulation of the NO_KEYPRESSED_FORCES_ALARM temporal assertion under two scenarios.
DDJ
Listing One import java.io.*; // a little simulation and assertion wrapper for the InfusionPump public class InfusionPumpMain { static Sim sim; public static void main(String argv[]) { sim = new Sim(); sim.run(); } } //========================================================== class Sim { boolean begin = false; Alarm alarm = new Alarm(false); boolean end = false; boolean keyPressed = false; boolean valveOpen = false; int globalTime; InfusionPump ip; //A keyword required for the TemporalRover code generator //TRDecl void resetAssertions() { //A keyword required for the TemporalRover code generator //TRReset } void run() { globalTime = 0; //* Test 1: begin.valveClosed(x 11).valveOpen(110).valveClosed(x 11).end // Pump should shut off System.out.println("************ Test1 ********"); globalTime = 0; ip = new InfusionPump(); resetAssertions(); begin = true; fireInfusionPump(ip); // time= 0 fireInfusionPump(ip); // no interesting event, time= 1 fireInfusionPump(ip); // no interesting event, time= 2 fireInfusionPump(ip); // no interesting event, time= 3 fireInfusionPump(ip); // no interesting event, time= 4 fireInfusionPump(ip); // no interesting event, time= 5 fireInfusionPump(ip); // no interesting event, time= 6 fireInfusionPump(ip); // no interesting event, time= 7 fireInfusionPump(ip); // no interesting event, time= 8 fireInfusionPump(ip); // no interesting event, time= 9 fireInfusionPump(ip); // no interesting event, time= 10 fireInfusionPump(ip); // no interesting event, time= 11 valveOpen = true; fireInfusionPump(ip); // time = 12 valveOpen = true; fireInfusionPump(ip); ... 110 cycles with valveOpen = true; // time = 122 fireInfusionPump(ip ... 11 cycles with valveOpen = false; end = true; fireInfusionPump(ip); fireInfusionPump(ip); // pump should be off! System.out.println("************ End Test1 ********"); //**** Test 1 end //* Test 2: begin.valveClosed(x 2).valveOpen(x 6).valveClosed(x 5).end // Pump should NOT shut off (no continuous 10 seconds of valveClosed) System.out.println("************ Test2 ********"); globalTime = 0; ip = new InfusionPump(); resetAssertions(); begin = true; fireInfusionPump(ip); // time= 0 fireInfusionPump(ip); // no interesting event, time= 1 fireInfusionPump(ip); // no interesting event, time= 2 valveOpen = true; fireInfusionPump(ip); // time =3 valveOpen = true; fireInfusionPump(ip); // time= 4 valveOpen = true; fireInfusionPump(ip); // time= 5 valveOpen = true; fireInfusionPump(ip); // time= 6 valveOpen = true; fireInfusionPump(ip); // time= 7 valveOpen = true; fireInfusionPump(ip); // time= 8 // now valveClosed = true; fireInfusionPump(ip); // time= fireInfusionPump(ip); // time= fireInfusionPump(ip); // time= fireInfusionPump(ip); // time= fireInfusionPump(ip); // time=
// (time = 9, 10, 11, 12, 13 for the five preceding fireInfusionPump calls)
end = true; fireInfusionPump(ip); // time= 14, pump should NOT shut down System.out.println("************ End Test2 ********"); //**** Test 2 end } void fireInfusionPump(InfusionPump ip) { System.out.println(""); System.out.print("*** Infusion Pump: time=" + (globalTime++)); System.out.print("; inputs: begin="+begin + ", end="+end); System.out.println(",keyPressed="+keyPressed + ",valveOpen="+valveOpen); boolean valveClosed = !valveOpen; ip.fire(begin, alarm, end, keyPressed, valveOpen); /*********************** Assertions ***********************************/ /* TRBegin //Global assertions; assertions for the entire infusion pump. // The syntax used is the TemporalRover syntax. // Assertion "MUST_SHUT_PUMP": valve closed for more than 10 // seconds followed by an end event must force pump off TRAssert { Always ( {valveClosed} -> Not ({valveClosed} Until_>=10_ ({end} And Not Next {ip.isPumpOff()})) )} => // now come the actions
{System.out.println("Assertion MUST_SHUT_PUMP: SUCCESS");$ // custom action, performed whenever assertion succeeds System.out.println("Assertion MUST_SHUT_PUMP: FAIL");$$ // custom action, performed whenever assertion fails System.out.println("Assertion MUST_SHUT_PUMP: DONE");} // custom action, performed whenever assertion result becomes immutable // Assertion "NEVER_SHUT_PUMP_TOO_SOON": never shut pump off unless valve is closed // for at least ten continuous seconds TRAssert { Next Always ( Not ({valveClosed} And Previous{valveOpen} And Eventually_<10_{ip.isPumpOff()}) )} => // now come the actions {System.out.println("Assertion NEVER_SHUT_PUMP_TOO_SOON: SUCCESS");$ System.out.println("Assertion NEVER_SHUT_PUMP_TOO_SOON: FAIL");$$ System.out.println("Assertion NEVER_SHUT_PUMP_TOO_SOON: DONE");} // Assertion "NO_KEYPRESSED_FORCES_ALARM": while in pump is on, if // keyPressed does not occur within two minutes // then alarm should sound within 10 seconds afterwards TRAssert { Always ( {ip.isPumpOn()} -> ((Always_<120_ {!keyPressed && ip.isPumpOn()}) -> (Eventually_[120,130]_{alarm.booleanValue()}) ) )} => // now come the actions {System.out.println("Assertion NO_KEYPRESSED_FORCES_ALARM: SUCCESS");$ System.out.println("Assertion NO_KEYPRESSED_FORCES_ALARM: FAIL");$$ System.out.println("Assertion NO_KEYPRESSED_FORCES_ALARM: DONE");} TREnd*/ /*********************** End Assertions ******************************/ begin = false; end = false; keyPressed = false; valveOpen = false; waitOneSec(); // firing of infusion pump controller at a 1Hz frequency } private void waitOneSec() { Thread t = new Thread(); try { t.sleep(1000); } catch (java.lang.InterruptedException e) { System.err.println(e); return; } } } /* end class */ //================================================= class Timer extends Thread { private int m_nSecCounter; private boolean m_isTimeout; Timer(int nSec) { m_nSecCounter = nSec; m_isTimeout = false; } public void run() { try { sleep(m_nSecCounter*1000); } catch (java.lang.InterruptedException e) { System.err.println(e); return; } m_isTimeout = true; } boolean isTimeout() { return m_isTimeout; } } /* end class */ //=============================================== class InfusionPump { public static final int TR_CONC_LEVEL_INFUSIONPUMP = 3; private static final int DONT_CARE = 7; private static final int DUMMY = 6; private static final int DUMMYNONREST = 0; public static final int St_InfusionPump_Alarm_Necessary = 0;// mapped to PS[0] public static final int St_InfusionPump_Alarm = 1;// mapped to PS[0] public static final int St_InfusionPump_OPN = 3;// mapped to PS[0] public static final int St_InfusionPump_CLS = 4;// mapped to PS[0] public static final int St_InfusionPump_PumpOff = 5;// mapped to PS[0] public static final int St_InfusionPump_WaitForKeyPressed = 0;// mapped to PS[1] private int[] PS = new int[TR_CONC_LEVEL_INFUSIONPUMP]; private int[] NS = new int[TR_CONC_LEVEL_INFUSIONPUMP]; private Timer timer_10; private Timer timer_120; // constructor InfusionPump() { PS[0] = St_InfusionPump_PumpOff; NS[0] = St_InfusionPump_PumpOff; PS[1] = DUMMY; NS[1] = DUMMY; PS[2] = DUMMY; NS[2] = DUMMY; } /* @param inputs: begin, end, keyPressed, valveOpen; output: alarm */ void fire(boolean begin, Alarm alarm, boolean end, boolean keyPressed, boolean valveOpen) { int TR_i; for( TR_i=0; TR_i Pump ON! waiting for operator keypress ***"); NS[0] = St_InfusionPump_CLS; NS[1] = St_InfusionPump_WaitForKeyPressed; NS[2] = 0; tmCount(10);
(continued from page 71) tmCount(120); } } } if(PS[1] == St_InfusionPump_WaitForKeyPressed) { if(tm(120)) { if((NS[0] == DONT_CARE) && (NS[1] == DONT_CARE) && (NS[2] == DONT_CARE)) { System.out.println(" ---> 2 minutes elapsed without operator key-press; generate alarm ***"); NS[0] = St_InfusionPump_Alarm_Necessary; NS[1] = DUMMY; NS[2] = DUMMY; } } } if(PS[0] == St_InfusionPump_Alarm_Necessary) { System.out.println(" ---> alarm ON***"); alarm.set(true); if((NS[0] == DONT_CARE) && (NS[1] == DONT_CARE) && (NS[2] == DONT_CARE)) { NS[0] = St_InfusionPump_Alarm; NS[1] = DUMMY; NS[2] = DUMMY; } } if(PS[0] == 2) { if(end) { if((NS[0] == DONT_CARE) && (NS[1] == DONT_CARE) && (NS[2] == DONT_CARE)) { System.out.println(" ---> Pump OFF! ***"); NS[0] = St_InfusionPump_PumpOff; NS[1] = DUMMY; NS[2] = DUMMY; alarm.set(false); } } } if(PS[0] == St_InfusionPump_Alarm) { if(keyPressed) { if((NS[0] == DONT_CARE) && (NS[1] == DONT_CARE) && (NS[2] == DONT_CARE)) { System.out.println(" ---> alarm OFF, waiting for operator key-press***"); alarm.set(false); NS[0] = St_InfusionPump_CLS; NS[1] = St_InfusionPump_WaitForKeyPressed; NS[2] = 0; tmCount(10); tmCount(120); } } } if(PS[0] == St_InfusionPump_OPN) { if(!valveOpen) { if(NS[0] == DONT_CARE) { System.out.println(" ---> valve is closed ***"); NS[0] = St_InfusionPump_CLS; tmCount(10); } } } if(PS[1] == St_InfusionPump_WaitForKeyPressed) { if(keyPressed) { if(NS[1] == DONT_CARE) { System.out.println(" ---> detected key-press ***"); NS[1] = St_InfusionPump_WaitForKeyPressed; tmCount(120); } } } if(PS[0] == 2) { if(valveOpen) { if(NS[0] == DONT_CARE) { System.out.println(" ---> valve is open ***"); NS[0] = St_InfusionPump_OPN; } } } if(PS[0] == St_InfusionPump_CLS) { if(tm(10)) { System.out.println(" ---> done counting 10 seconds of valveclosed before pump shut down ***"); if(NS[0] == DONT_CARE) { NS[0] = 2; } } } /* assigning next state to present state */ for( TR_i=0; TR_i
case 10: timer_10 = new Timer(10); timer_10.start(); break; case 120: timer_120 = new Timer(120); timer_120.start(); break; } } boolean isPumpOff() { return PS[0] == St_InfusionPump_PumpOff; } boolean isPumpOn() { return PS[0] != St_InfusionPump_PumpOff; } boolean inState_WaitForKeyPressed() { return PS[1] == St_InfusionPump_WaitForKeyPressed; } } /* end class */ //=================================================== class Alarm { private boolean m_value; Alarm(boolean val) { m_value = val; } void set(boolean newVal) { m_value = newVal; } boolean booleanValue() { return m_value; } }
Listing Two *** Infusion Pump: time=0; inputs: begin=true, *** end=false, keyPressed=false, valveOpen=false ---> Pump ON! waiting for operator key-press *** Assertion MUST_SHUT_PUMP: FAIL Assertion NEVER_SHUT_PUMP_TOO_SOON: FAIL Assertion NO_KEYPRESSED_FORCES_ALARM: FAIL *** Infusion Pump: time=1; inputs: begin=false, *** end=false, keyPressed=false, valveOpen=false Assertion MUST_SHUT_PUMP: FAIL Assertion NEVER_SHUT_PUMP_TOO_SOON: SUCCESS Assertion NO_KEYPRESSED_FORCES_ALARM: FAIL . . . *** Infusion Pump: time=133; inputs: begin=false, *** end=true, keyPressed=false, valveOpen=false Assertion MUST_SHUT_PUMP: FAIL Assertion NEVER_SHUT_PUMP_TOO_SOON: SUCCESS Assertion NO_KEYPRESSED_FORCES_ALARM: FAIL *** Infusion Pump: time=134; inputs: begin=false, *** end=false, keyPressed=false, valveOpen=false Assertion MUST_SHUT_PUMP: FAIL Assertion MUST_SHUT_PUMP: DONE <--- bug #1 caught Assertion NEVER_SHUT_PUMP_TOO_SOON: SUCCESS Assertion NO_KEYPRESSED_FORCES_ALARM: FAIL ************ End Test1 ********
Listing Three *** Infusion Pump: time=10; inputs: begin=false, *** end=false, keyPressed=false, valveOpen=false ---> done counting 10 seconds of valve-closed before pump shut down *** Assertion MUST_SHUT_PUMP: FAIL Assertion NEVER_SHUT_PUMP_TOO_SOON: SUCCESS Assertion NO_KEYPRESSED_FORCES_ALARM: FAIL *** Infusion Pump: time=11; inputs: begin=false, *** end=false, keyPressed=false, valveOpen=false Assertion MUST_SHUT_PUMP: FAIL Assertion NEVER_SHUT_PUMP_TOO_SOON: SUCCESS Assertion NO_KEYPRESSED_FORCES_ALARM: FAIL *** Infusion Pump: time=12; inputs: begin=false, *** end=false, keyPressed=false, valveOpen=false Assertion MUST_SHUT_PUMP: FAIL Assertion NEVER_SHUT_PUMP_TOO_SOON: SUCCESS Assertion NO_KEYPRESSED_FORCES_ALARM: FAIL *** Infusion Pump: time=13; inputs: begin=false, *** end=false, keyPressed=false, valveOpen=false Assertion MUST_SHUT_PUMP: FAIL Assertion NEVER_SHUT_PUMP_TOO_SOON: SUCCESS Assertion NO_KEYPRESSED_FORCES_ALARM: FAIL *** Infusion Pump: time=14; inputs: begin=false, *** end=true, keyPressed=false, valveOpen=false ---> Pump OFF! *** Assertion MUST_SHUT_PUMP: FAIL Assertion NEVER_SHUT_PUMP_TOO_SOON: FAIL Assertion NEVER_SHUT_PUMP_TOO_SOON: DONE <--- bug #2 caught Assertion NO_KEYPRESSED_FORCES_ALARM: SUCCESS
DDJ
PROGRAMMING PARADIGMS
The Omega Man
Michael Swaine
It will come as no surprise to regular readers of DDJ that your "Programming Paradigms" columnist is fascinated with the idea of the universe being one big computation. That is, a program plus the computer it runs on. In some versions of this idea, the program/computer is a cellular automaton. This doesn't seem to be a reduction in generality of the idea, but more a notational convenience. Cellular automata are, or can be, universal computers, equivalent to a universal Turing machine.

A cellular automaton for the universe would consist of a massive transformation rule that specifies how the cells of a digital grid of elemental events change during one tick of the universal clock. This metarule would generate the next instant in the history of the universe from the previous one by means of rules like those in the Game of Life, where an occupied cell stays occupied on the next step if two or three of its neighboring cells are occupied, an empty cell becomes occupied if exactly three are, and a cell otherwise becomes unoccupied because its neighborhood is less or more dense than this.

The universe, under this idea, is strictly digital. There is no physical realization of the mathematical notion of a real number: Only integers describe real things because everything — matter, energy, space, time, consciousness — is ultimately discrete. This idea, as one exponent (Ed Fredkin) puts it, "is an atomic theory carried to a logical extreme where all quantities in nature are finite and discrete. This means that, theoretically, any quantity can be represented exactly by an integer."

Michael is editor-at-large for DDJ. He can be contacted at [email protected].
Uh-huh. The universe is a computer program, and a digital one at that. Wow, what a radical thing for someone whose academic background is in computer science to say to an audience of programmers.

Okay, but some very smart people think there may be something to this idea of the universe as a computation. I've probed the views of some exponents of this view here in this column: Stephen Wolfram with his New Kind of Science and Edward Fredkin with his Digital Philosophy. Now add to these Gregory J. Chaitin, inventor of Algorithmic Information Theory. All three theories, as articulated in their books:
• A New Kind of Science (Wolfram Media, 2002; ISBN 1579550088. Also now available free online — but huge! — at http://www.wolframscience.com/nksonline/).
• Digital Philosophy (unpublished; 2001 draft at http://www.digitalphilosophy.org/; also see "An Introduction to Digital Philosophy," International Journal of Theoretical Physics, Vol. 42, No. 2, 2003, pp. 189–247).
• Algorithmic Information Theory (Cambridge University Press, 1987; ISBN 0521343062; but also see his recent ebook Meta Math at http://www.cs.auckland.ac.nz/CDMTCS/chaitin/).
pull together mathematics, theoretical physics, and computer science in a new synthesis. But these are not quite mainstream views: For one thing, modern theoretical physics is done using ordinary and
partial differential equations, not integer arithmetic. It would seem either that most modern physicists have got it wrong or that the Digital Philosophers have. I don't know which is the case, but I know that you get into more interesting questions if you bet on the outsiders. Like Wolfram, Fredkin, and Chaitin.

Random Thoughts
Gregory Chaitin works at the IBM Watson Research Center in New York. As a teenager, he tried to interest the legendary Kurt Gödel in his theories. Gödel was too busy, but the teenager's ideas were very clever and have proved extremely fruitful. For four decades, Chaitin has been the driving force behind what he calls "Algorithmic Information Theory." He also came up with one of the most interesting numbers there is — Chaitin's Omega. But to get to that interesting number, it is first necessary to talk about uninteresting numbers.

The concept of an uninteresting number comes up in Berry's Paradox, which was, paradoxically, not invented by Berry, although G.G. Berry, an Oxford librarian, did suggest the idea to Bertrand Russell, who turned it into a serious topic for mathematical and philosophical inquiry and named it after his librarian friend. Berry's Paradox is exemplified by the phrase, "The first natural number that cannot be named in fewer than fourteen words." That phrase contains 13 words and purports to name a number that can't be named in fewer than 14 words; hence, the paradox. You can also ask what is the smallest uninteresting number, and whatever the
answer, it will be paradoxical, because the smallest uninteresting number is interesting by virtue of being the smallest uninteresting number. There are various ways to make the concept of uninterestingness concrete enough to make the paradox interesting, and the Berry Paradox is one.

Chaitin did to the Berry Paradox what Gödel had famously done to the Liar Paradox in developing his results on the inherent incompleteness of arithmetic and other formal systems. The Liar Paradox involves the statement, "This statement is false." If it really is false, then what it says is true, and if it's true, then it's false. Paradox. Gödel varied the statement to read, "This statement is unprovable [in a particular formal axiomatic theory, such as arithmetic with addition and multiplication]." He showed that, to avoid the analog of the Liar Paradox, one had to conclude that arithmetic and all sufficiently complex formal axiomatic systems are incomplete: that there are true statements in them that cannot be proved in the systems.

Chaitin did the same trick with the paradoxical phrase in the Berry Paradox. He transformed it to "The first natural number that can be proved [in a particular formal axiomatic theory] to have the property that it cannot be specified [on a particular universal Turing machine] by a computer program with fewer than N bits." That looks considerably different, but it's the same kind of transformation that Gödel did on the Liar Paradox statement, and Chaitin gets analogous results. He shows that there is, in fact, a computer program of fewer than N bits that computes the number that this statement says can't be computed in fewer than N bits, and like Gödel, he uses the contradiction to prove an incompleteness result. This incompleteness result has many consequences, including the fact that you can never prove that there is no program shorter than a given program that produces the same output.

The Perfect Number
Say you feed random bits to a Turing machine. That is, you flip a coin or perform some other sequence of actions that produces a truly random sequence of 1s and 0s, and you feed these 1s and 0s to a Turing machine as its data. What is the probability that the Turing machine will eventually halt? The question has, for any given Turing machine, a precise answer, a particular real number between 0 and 1. That number is Chaitin's Omega.

This is one way of defining Chaitin's Omega. Chaitin's Omega is not a single number, but rather depends on the specification of the Turing machine in the definition above. But for any given Turing machine, Chaitin's number is a specific positive real number. And it turns out to be a very interesting one.

Philosophers, mathematical cranks, and mystics have long sought some wonderful number that would encompass all
knowledge. Philosopher Charles Sanders Peirce was obsessed with the number 3. Chaitin’s Omega has more digits than Peirce’s, and it also really does encode a remarkable amount of very precious information. It contains the answers to most of the open questions in mathematics. It has some other interesting properties. It is maximally dense, in a sense that seems (but only seems) to defy the Berry Paradox. And it is random in the strongest possible sense. Chaitin was one of those who came up with that strongest definition of randomness — that something is random only if no shorter description for it exists. Omega is random in that sense. Chaitin’s Omega compactly encodes all solutions to the halting problem. If the first few thousand digits of Omega were known, we would be able to solve most of the open problems in mathematics and theoretical computer science. This would be done (in theory!) by decomposing a sum of probabilities. Since Omega is defined to be the halting probability of a Turing machine with random input, it is really the sum of the halting probabilities of all possible programs. So what you do is, you start feeding programs to the Turing machine on a “Twelve Days of Christmas” schedule: Run the first program one step, then run the second program one step and the first one a second step, then the third program one step and Dr. Dobb’s Journal, August 2004
Empirical Mathematics

Chaitin's philosophy of digitalness seems congruent to Wolfram's and Fredkin's. One area where it leads him is to the notion of the essentially empirical nature of mathematics. Wolfram effectively credits Chaitin and Kurt Gödel with the philosophical justification for Wolfram's entire New Kind of Science research program. He says:

The idea that as a matter of principle there should be truths in mathematics that can only be reached by some form of inductive reasoning — like in the natural sciences — was discussed by Kurt Gödel in the 1940s and Gregory Chaitin in the 1970s. But it received little attention.
Chaitin says that mathematics can be, indeed must sometimes be, an experimental discipline. As in natural science, there are as a matter of principle some mathematical truths that you cannot arrive at through deductive reasoning within an axiomatic system. Here's how he arrives at that: In Chaitin's incompleteness results, he doesn't exactly conclude that you absolutely can't, for example, prove that a particular program is the shortest possible program that produces its particular output. It's just that, to prove it, you would need more bits worth of axioms in the system than there are in the result. You'd be putting as much in as you're getting out. You might as well stipulate the result as an axiom, because you'll never prove it within the system otherwise, even though you might have excellent reasons, outside this particular formal axiomatic system, to believe it to be true.

Chaitin claims that mathematics is full of results that are just true for no good reason, and that the only way to get them into your theories is to add them as axioms. When you come across one, it is a discovery, like the empirical discoveries in other sciences. Mathematics is an empirical science.

This ties in nicely with the universe-as-computation idea. The universe is cranking out tomorrows as fast as it can, and there is no algorithm that can do it faster. So the future is not just unknowable in practical terms, it is inherently unknowable. We'll know it when it happens, and not before.

DDJ
EMBEDDED SPACE
Datasheet Space Ed Nisley
A glance at a hardware engineer's bookshelves used to provide valuable status information. If the serried databook ranks showed a date less than a year old, then he (rarely she) was still doing useful work. A three-year lag marked the jump into management hyperspace, with lingering technical knowledge of the current project. If the most recent databook dated back five years, then either a good engineer had undergone a spinal net strip and was now a fully committed manager or he was an old-tech info junkie.

Nowadays, you simply cannot tell what's going on because databooks have largely gone the way of the dodo and there's no cubicle room for bookshelves, anyway. The same information, digitized and tucked in PDF files on vendor web sites, leaves no traces for hallway sociologists.

Whether paper or PDF, databooks provide hardware engineers with a resource largely unavailable to software designers — specific information about how a device will perform under a variety of conditions. Datasheets applied without engineering judgment, however, lead to designs that work only in Datasheet Space: that ideal universe where things behave the way they should. Let's investigate what a hardware datasheet has to say, what it doesn't say, and why software must play some serious catchup before it can even think of leaving Datasheet Space.

By the Numbers

If you're not familiar with the datasheet genre, the LF411, a classic operational amplifier, will serve as a good introduction. Fetch the complete datasheet from http://www.national.com/ds/LF/LF411.pdf. Because datasheets don't include pronunciation hints, here's how I say it: "ell eff four eleven."

A good datasheet begins with a narrative description of the device and an explanation of why it's better than another device you might also be considering. The LF411 is an operational amplifier, an analog building block that amplifies the voltage difference between its two input terminals, with (for its day, perhaps two decades ago) low bias currents and decent bandwidth.

Ed's an EE, PE, and author in Poughkeepsie, NY. Contact him at [email protected] with "Dr Dobbs" in the subject to avoid spam filters.
A good datasheet has a bulleted list of key specifications on the first page so that you can quickly decide if this device is the right one for the job. Those specs, however, may be "typical" numbers, not absolute limits, and have suckered generations of newbie engineers into producing some truly marginal designs. I'll have more to say on that subject a bit later.

The same silicon chip may appear in different packages, as shown in Figure 1, and the datasheet maps functions to pins. You can buy an LF411 in a familiar 8-pin DIP (dual-in-line) "caterpillar" package or a hermetically sealed Mil-spec metal can. Other op-amp chips come in 8-pin SOIC (small outline integrated circuit) packages, 5-pin SOT23 (small outline transistor) packages, or even smaller chip-scale packages that resemble pepper flakes. You must pay attention to such details, as I know of one case where the same IC in the same package from two different manufacturers had two different pin assignments. One chip used wire bonding and sat face-up in the package, while the other used flip-chip bonding and faced down, resulting in mirror-imaged pins. Oops!

Transistor datasheets show the pinout looking at the bottom of the package and IC datasheets show the pinout looking at the top of the package. I think this dates back to the days when transistors replaced vacuum tubes: You hand-wired tube sockets from below the chassis so tube pinouts always showed the bottom view. When ICs took over digital logic, however, the circuits were debugged from the top of the board. Some datasheets have a 3D perspective drawing that eliminates any possible misunderstanding.

Next comes the Absolute Maximum Ratings section, which tells you how much applied voltage and power dissipation the chip can withstand, the maximum (and minimum!) operating temperatures, the chip's thermal behavior, and the time-and-temperature limits you must observe while soldering it to the board. You're generally cautioned that these are the maximum survivable values: While the device may not work correctly at the extremes, it won't be damaged.

The DC Electrical Characteristics detail the steady-state parameters you must either supply to or expect from the device. Voltages, currents, gains, temperature coefficients,
and noise levels appear in this section. This is where you find out just how nonideal the chip will be in your circuit. An ideal operational amplifier, for example, has infinite gain, zero input current, and unlimited output power. Real op amps have very definite restrictions that aren't the limiting factor for most circuits. Evaluating the specs with respect to your circuit will tell you whether that's true in your situation. Neglecting that evaluation is the mark of a newbie.

Some datasheets have three columns of numbers: the minimum, typical, and maximum values. Key parameters may be measured at three temperatures, producing nine different numbers for what's overtly the same thing. Your job is to figure out which numbers apply to your situation.

For example, the LF411 has an input bias current, the nonideal current flowing through the input pins, of 50 pA. That's the typical value at 25 degrees Celsius: The mean bias current of a large sample will be about 50 pA. There's no indication of the standard deviation or the range, though, so you use the typical value at your own risk.

If you're designing a sample-and-hold circuit (for which you wouldn't use an LF411, but work with me on this), the bias current creates an error voltage by charging the circuit's input sampling capacitor. The error depends linearly on bias current and inversely with capacitance. The sampling time depends linearly on the capacitance, so the capacitor will be as small as you can get away with, making the op amp's bias current an absolutely vital specification.

The LF411's maximum input bias current is 200 pA at 25 degrees C, a factor of 4 worse than the typical value, and jumps to 4 nA at 70 degrees C and 50 nA at 125 degrees C. Because the input bias current increases by a factor of 1000 from room temperature to the maximum allowed operating temperature, the holding error for your S&H circuit also increases by a factor of 1000. A typical voltage error of, say, 1 mV balloons to a maximum error of 1 V — enough to invalidate your design.

Perhaps you think your circuit runs at room temperature? The datasheet temperatures refer to the chip, not its surroundings, so you must extract the package's thermal resistance from the Absolute Maximum Ratings section, multiply it by the
chip’s expected power dissipation (which depends on the load you’re driving), and add the result to the ambient temperature. Get the picture? You must know how to put all the numbers in the datasheet together to figure out whether the device will work in your circuit. Actually, all the numbers may not be present, in which case you should become somewhat suspicious of the datasheet. The AC Electrical Characteristics section specifies the chip’s frequency limits, speeds, slew rates, and other values that define its response to changing signals. One key value for op amps, the gain-bandwidth product, relates the signal frequency and the overall gain: You can increase one only by decreasing the other. Suppose you must amplify a microphone signal by a factor of 1000, from 1 mV to 1 V. The LF411’s gain-bandwidth product is typically 4 MHz, which limits the maximum bandwidth to 4 kHz=4 MHz/1000. This device won’t emit high-fidelity audio, particularly when you realize the minimum GBW is only 3 MHz. Producing reasonably hi-fi audio requires a 20-kHz bandwidth (or much more if you have Golden Ears), which limits the maximum gain to 150=3 Mhz/20 kHz. You must build up more amplification by cascading several lower-gain stages. The Typical Performance Characteristics section includes graphs that show how key parameters vary with voltage, temperature, load, time, and frequency. These are typical values, not minimums or maximums, but they give you a feel for the changes. For example, knowing that the Power Supply Rejection Ratio drops from 120 dB at 10 Hz to under 40 dB at 1 MHz tells you that you must have a very quiet supply when you’re working with high-frequency signals. Conversely, knowing how much the output impedance increases at those
Figure 1: These ICs from my parts heap contain different chips. The metal can and DIP caterpillar date back to the dawn of ICs and are still available for some chips. The smaller SOIC and minuscule SOT23 packages show up in contemporary gizmos, with even smaller packages used in things like mobile phones.

The Typical Performance Characteristics section includes graphs that show how key parameters vary with voltage, temperature, load, time, and frequency. These are typical values, not minimums or maximums, but they give you a feel for the changes. For example, knowing that the Power Supply Rejection Ratio drops from 120 dB at 10 Hz to under 40 dB at 1 MHz tells you that you must have a very quiet supply when you're working with high-frequency signals. Conversely, knowing how much the output impedance increases at those
frequencies tells you that the load-driving capacity isn't what you might think from that first-page bulleted list.

The Application Hints section is where the chip's designers tell us how to actually use the thing and what to watch out for when we do. Very often, a single sentence can reveal a potential oops that appears nowhere else in the datasheet. Consider this: "Exceeding the negative common-mode limit on either input will force the output to a high state, potentially causing a reversal of phase to the output." When you dig back through the DC Characteristics and the graphs, you find this means that an input voltage within a few volts of the negative supply will jam the output high, even if the differential input voltage calls for a negative output voltage. Oops!

This section may also have sample schematics that give a broad-brush overview of its applications, while omitting vital components like transient suppressors and current limiters. You must apply general engineering knowledge to the examples, although I've seen them make their way directly into production, with sometimes dismal results.

The last page of the datasheet generally sports disclaimers about not using the device in life-support products and high-reliability applications. Most of us aren't affected by those restrictions, but the very-fine-print section about the lack of patent licenses should get your attention, particularly in this day and age.

Now that you've seen the National Semiconductor LF411 datasheet, you'll understand why the Texas Instruments version at http://focus.ti.com/lit/ds/symlink/lf411.pdf leaves a bit to be desired. The two devices may be essentially identical, but which would you rather use — one with full disclosure or one that tempts you into Datasheet Space? A clue for product managers and documentation writers: More information sells!

Software Datasheets?

If you're a software maven and you're still with me, you should have a bad feeling in the pit of your stomach. Those hardware folks have problems in places where software folks don't even have places: There is no software equivalent to a hardware datasheet.

Yes, we have functional descriptions and high-level specifications and UML and so forth and so on. What's missing is the range of information in even the simplest semiconductor datasheet: how it behaves under specified conditions, what to expect over a range of conditions, and why you might want to apply it in a particular manner. Software is remarkably fragile in comparison with even the most trivial hardware.
Some years back, Red Hat absorbed an amazing amount of flak for shipping gcc 2.96 rather than, oh, gcc 2.95.3. It seems that a 0.00.7 version increment broke existing code in strange and mysterious ways and produced strong opinions on both sides of the issue. The actual issues are beyond the scope of this column, but suffice it to say, nobody really expected the extent of the problem and none of the affected code came with a compiler caveat.

The linkage between even a trivial software module and the system running it is far more complex and subtle than the eight pins of an LF411, so perhaps it can't be distilled down to a few comprehensible pages. However, we seem so far from being able to characterize software that we're not even close to living in Datasheet Space, let alone in the real world. Our view of software is so idealized that the possibility of, say, a buffer overflow isn't even worth documenting. We ignore error conditions (do you check printf's return value?), blow by numerical issues (quick: What's the difference between affine and projective infinity?), and handwave around security (should we trust electronic voting?).

Even if you don't have those problems, the industry at large sure does! There's money to be made for putting a dent in this problem, much less solving it. I suspect progress will require a change in attitude, more than a change in practices, but the languages and methods we currently use surely aren't helping. If this tirade inspires you to crack the code, send me a check every few months, okay?

Reentry Checklist

Where do you put information you won't need again? The classic Signetics Write-Only memory datasheet is once again available at http://www.ganssle.com/misc/wom.html. A collection of more practical datasheets lives at http://www.datasheetcatalog.com/, as well as individual manufacturer web sites. Searching for just the part number using Google will generally produce enough hits to keep you occupied for days.

Red Hat's side of the gcc 2.96 issue is at http://www.redhat.com/advice/speaks_gcc.html, with another view at http://gcc.gnu.org/gcc-2.96.html. It's old history by now, but remember the lesson for your own code.

If you're interested in this electronics stuff and want to learn more, get a copy of Paul Horowitz and Winfield Hill's The Art of Electronics (Cambridge University Press, 1989; ISBN 0-521-37095-7). Even if analog isn't your main gig, you'll know more than most hardware engineers by the time you work your way through the commentary and examples. Highly recommended!

DDJ
CHAOS MANOR
WinHEC: The Future of Windows Jerry Pournelle
This spring, I spent a week in Seattle at WinHEC, the annual Windows Hardware Engineering Conference and, as usual, I was on information overload well before the conference ended. There's a lot happening in our industry; some of what we will see this year, and a lot in two years.

Jerry is a science-fiction writer and senior contributing editor to BYTE.com. You can contact him at [email protected].

It has become clear enough: The Pentium 4 architecture has just about reached its limits. There will be some marginally faster Prescott and possibly even Northwood chips coming, but they won't go very far. Peter Glaskowsky notes that Intel has been stuck near 3 GHz since the end of 2002. "We're one full Moore's Law generation past the 3.06 GHz part's introduction, and the Pentium 4 is only 10 percent faster." Intel has achieved good yields on 3.2-GHz chips, but not on 3.4, and the faster they shoot for, the worse the yield. Now Intel always has the option to make a lot of Pentium 4s and cherry pick the very fastest, but it's an expensive way to stay in the game against AMD; and AMD 64-bit systems with nVIDIA support just keep getting better and better.

The bottom line is that Intel decided to bite the bullet. Tejas, the successor to Prescott, is gone. It seems clear to me Prescott won't be ramped up much, and the Pentium 4 architecture is essentially as good as it is going to get. We are at the end of an era.

Understand, what we have is pretty good. I'm running both Prescott and Northwood 3.2-GHz Intel systems, and they are not only faster than any software I want to run with them, they're faster than
any software I know about (except for games). The people who think of the best Northwood and the early Prescott systems as “limiting” are generally the people who build extreme game machines. But gamers don’t want Northwood or Prescott anyway: They want Athlon systems. One reason came out at WinHEC: 32-bit applications run faster on 64-bit systems. All this is great for AMD sales — and I can say that my next extreme game system will be AMD based. With the cancellation of Tejas, for the first time Intel has nothing with which to compete with AMD at the high end, and that situation will continue through this year and perhaps beyond. It’s unclear when the first of the new dual-core replacements for Pentium 4 will come out, but that’s probably not much before mid 2005. Naturally Intel will hurry as fast as it can. I don’t expect dual-core server parts before 2006, leaving Intel with nothing to pitch against AMD’s Opteron.
Two Cores Are Better Than One

The new processors will introduce a new technology to desktop systems. There will be two cores to each chip; in essence, each CPU will be a dual-processor system, with the processors close enough together to take full advantage of the duality. Intel believes this approach will deliver extreme performance while keeping heat dissipation under control, and early experiments have been very encouraging. Pentium M systems are said to be almost as fast as current Northwood and Prescott, but with less heat generation. Intel never comments on unreleased parts, so this is all speculation, but we can wish them well at these efforts.

Pentium M architecture is derived from the Intel Mobile Pentium chips. Those are single core, but the new desktop chips incorporate two cores in each chip. Note that software sees single-core systems with Hyperthreading Technology as systems with dual processors, and therefore the Pentium M dual-core systems will run that software without problems. The latest Intel move doesn't abandon the considerable effort put into writing applications to run faster with Hyperthreading Technology, and this is where Intel hopes to stay competitive in the very high-end gaming systems world. Dual-core Pentium M systems should rock. But we won't have them for a year or more.

Meanwhile, AMD keeps rolling along, bringing out better and better 64-bit systems while their partners at nVIDIA improve both the graphics and the support and glue chips to make AMD motherboards. Almost as an afterthought, as WinHEC 2004 closed, AMD also announced 64-bit mobile CPUs.
The 64-Bit World to Come

Everywhere we looked at WinHEC we saw 64 bits. It didn't have to be this way, but today's 32-bit software runs somewhat faster on today's 64-bit systems, and that's without any special effort to make it happen. Part of this is due to increased efficiency in the operating system overhead, of course; but the bottom line is that if you're in need of more speed, 64 bits looks to be the way to go.

Buried deep in some of the WinHEC presentations was a surprise: 64-bit Windows XP won't run 16-bit software. Moreover, it's not at all clear that Longhorn will do it either. Brian Marr in a business strategy session flat
Dr. Dobb’s Journal, August 2004
out said that 16-bit Windows is going away, while Samer Arafeh, another software design engineer for the Windows base OS kernel, had a slide proclaiming "No 16-bit support on 64-bit Windows. Win32 API's for 16-bit support are stubbed out to fail."

This is a huge change for Microsoft. Only a few months ago at the Microsoft Professional Developers Conference (PDC 2003), Bill Gates himself said that "All old applications will continue to run" in 64-bit Windows. The tricky part is that the change is in the WinHEC briefing slides, but it wasn't emphasized, and was never mentioned in the keynotes at all.

Note that in practice Microsoft won't entirely abandon 16-bit legacy applications, but the support will be much the
same as the legacy support for older DOS software in Windows XP: virtual servers and virtual machines to run the older applications. Having just struggled to get Railroad Tycoon, an older DOS game, running on a Windows XP machine, I know more about Virtual DOS and DOSBOX than I really wanted to; but in fact they do work, and you can get the older DOS stuff running if you really want to, and I don't expect it to be more difficult for those who need 16-bit applications to run on a 64-bit system. Once set up, users should neither know nor care that their 16-bit applications are running in an emulator; they should still Just Work.

The intriguing question, and one not answered at WinHEC, is whether Longhorn (the next-generation Windows) will have
Dr. Dobb’s Journal, August 2004
direct provisions for 16-bit applications. It's not clear to me that there will be a 32-bit version of Longhorn at all. I have seen neither confirmation nor denial. I do know Microsoft would prefer not to release any more versions of Longhorn than needed, and if there were no 32-bit version of Longhorn it would simplify matters considerably. On the other hand, new 32-bit computers are very fast and there will be a lot of 32-bit machines in use when Longhorn comes out; and Microsoft seldom, if ever, abandons a large potential market. If I had to guess, I would say there will be both 32- and 64-bit versions of Longhorn, with a decided shift of resources over to the 64-bit side.

Aero: The New Look of Windows

First, some terms: "Aero" is the user interface (UI) for Windows Longhorn, and is built atop "Avalon," Longhorn's presentation layer. Avalon, "WinFS" (the still-upcoming Windows future file system), and "Indigo" (the communications layer, for talking to everything from Web to COM ports) make up "WinFX," the "programming model for Longhorn."

While it's still very much under development, Aero is now far more precisely defined than at WinHEC 2003, or even at November 2003's Microsoft Professional Developers Conference (PDC). The developers have struggled mightily to sketch in the missing details of what Longhorn will look like, how developers will interface to it, and the associated hardware requirements. This isn't being done in a vacuum: At both this WinHEC and last year's, every graphics presenter asked for feedback on what developers would like to see, justified the decisions made so far, and went out of his or her way to answer questions.

This is important, because Longhorn will be the "biggest architectural advancement since Windows 95, at least for graphics, text, and media," as Greg Schechter, Microsoft's software architect for the Windows Client Platform, said in his Avalon Graphics Stack overview.

Among the biggest changes: GDI/GDI+, the Graphics Device Interface and current main method for drawing windows on the screen, is going to become a second-class citizen. While all existing GDI calls will continue to work for existing applications, the entire OS (and presumably future apps) will use Avalon for rendering. The minimum graphics card requirements for Longhorn will be a DirectX 9 card with Shading Model 2. We will follow this up next month.

Another: The GPU, the Graphics Processing Unit, will be used to accelerate the rendering in Windows. In the past, almost no Windows display calls were
accelerated using the graphics chip; all that was done in CPU and main memory. Currently, when an application moves over another, Windows tells each one to redraw. That's reasonably fast on modern machines, but is inelegant at best. Redraw-on-command causes the "screen droppings" problem, where moving one window over another writes copies of the upper window on the lower one.

XP and earlier versions of Windows use little of the available video memory; when you scroll a Word document, the scrolled text loads from main memory into video memory one chunk at a time. Longhorn will use video memory to cache scrolling regions, and generally make better use of it. Think of how much faster PowerPoint could scroll through a long presentation if the next slide was rendered into video memory before you got there.

As an example, David Brown, the development lead for the Windows Text Team, demonstrated preview-scrolling through a long formatted document; as he did, a ghostly copy of the page you were passing by appeared next to the scroll box. He confirmed that this really cool feature was essentially "free," easily accessed by applications using the WinFX APIs. Better, it was a live preview, updating as the application did; if there was an animation in the page you scrolled by, a tiny live animation would play on the preview. If this feature works as well as his demonstration, it will be an excellent navigation tool.

David Brown is at heart a typography nerd, and I use that term as a compliment. At the last two WinHECs, he's covered how Longhorn will handle text display in exotic character sets, such as Thai and Traditional Chinese. At PDC 2003 his team showed examples of properly rendered drop caps, and of a lead capital "L" whose base extended under the next two letters — both situations required hand-coded text rendering in previous Windows. At WinHEC 2004 he talked about how Longhorn will use high-quality text just about everywhere, using subpixel rendering and some other smart tricks. Longhorn will properly display high-quality antialiased text over video right in the operating system, which wasn't possible in the past. This is going to put demands on the GPU like never before.

Because the GPU is going to be tied so tightly to rendering, the Longhorn Display Driver Model (LDDM) will require (among other things) access to soft and hard reset of the GPU, so if the video card gets into an unusual state, Longhorn can klonk it on the head. That's hardly the only change; the LDDM is radically different from the Windows XP driver model, which in turn is quite different from past ones. From the inquiries
and comments of the display-driver mavens in the audience, the industry is still trying to absorb all of this. True, ATI and nVIDIA have had staff working in Redmond for several years, but video driver writers (admittedly a relatively small community) are going to have some studying to do.
Other changes: Screen captures have long been difficult to implement, requiring the screen-capture software to fake out Windows. Longhorn will have a simpler interface for grabbing what's on screen.

Yet another: Make available all the features of the host PC no matter how accessed. This may be directly, through a linked TabletPC (a method that appeared so often, I suspect that Microsoft is very fond of it), and the main method, remote terminal sessions.

Another big change: LDDM will only allow a single display driver at a time. You can still have multiple display adapters, but they'll have to share a single driver — effectively meaning they'll have to be from the same manufacturer. That seems a reasonable restriction.

More Longhorn

Last year, Jim Allchin announced some early Longhorn features and code bits for testing. There was more of that this year, but how much of Longhorn is running in test models and how much is promises and slideware isn't clear. For example, Windows Future Storage (WinFS Storage) was discussed, but I didn't see any demonstrations of it. You can find out about as much as has been released on the Microsoft site (http://msdn.microsoft.com/Longhorn/understanding/pillars/WinFS/) but, aside from the fact that the file storage format will include a full relational database, not much else is known.

Clearly, it's time for a new file storage system. Obviously, the original DOS File Allocation Tables (FAT) never contemplated thousands of files, some a gigabyte and larger. NTFS did, but finding things in all that clutter is nearly impossible.
WinFS Storage tries to deal with that: Allchin wanted us to be impressed by its relational filing-system capabilities. With WinFS Storage you can, in theory at least, find all files related to each other no matter what their format or where they are stored. This reminds me of Traveling Software's ill-fated Papillion relational filing system, which was a great idea but before its time: The hardware just wasn't up to the job and, of course, it had to layer over FAT; this time, the relational elements are to be built into the File System itself.

We were given a demonstration of some of its capabilities: a lawyer building a case file. It was really impressive, but it was a demo and I have no idea how much was hard-coded concept illustration and how much was actual working code. If WinFS works the way Allchin seems to think it will, I can hardly wait. I find I wrote the word "INCREDIBLE" four times on my Tablet.

Long time readers may recall "Cairo," an earlier Microsoft attempt at building a relational filesystem. That never shipped, probably due to hardware limits as much as anything else. Microsoft seems to have higher hopes for WinFS; but in any case, WinFS must work perfectly, first time, every time, or it will be a huge liability. Microsoft can ship Longhorn without it, and if they have to, probably will. While they were careful never to mention an actual date, the unspoken promise was about WinHEC 2006; in any case, about two years from now. That will have been five years since Windows XP. On to Cairo!

Winding Down

The game of the month remains Star Wars Galaxies (SWG), the online massively multiplayer roleplaying game based on the movie series. I suppose at some point I'll go back to Dark Age of Camelot, or even Everquest. I've never given up my accounts. But I have managed to get Ventrilo (http://www.ventrilo.com/) working on SWG and that adds a lot. Ventrilo is a voice communication system that works seamlessly with SWG, and has a lot of nifty features like button mapping. I've mapped the center wheel button (never used in SWG) to be the push-to-talk switch. I'm using it with a Plantronics behind-the-head headset, and it all Just Works. It's a lot of fun to hunt huge beasts with a team of comrades you can talk to.

The computer book of the month is Chris Hurley et al., WarDriving: Drive, Detect, Defend, A Guide to Wireless Security (Syngress, 2004; ISBN 1931836035). The title says about all you need to know. If you want to try some WarDriving, you need this book. And if you're concerned about being a victim, you certainly need it.

DDJ
PROGRAMMER’S BOOKSHELF
The OpenGL Shading Language Paul Martz
In the 1970s, vector-based graphics gave way to raster graphics, and several raster algorithms demonstrated the new technology's ability to produce breathtaking images. Unfortunately, accelerating these algorithms in hardware proved difficult and costly. Recently, programmable graphics hardware capable of rendering such algorithms in real time has become inexpensive and widely available. The result is a handful of proprietary shading languages created and proposed as standards for this new industry. OpenGL Shading Language, by Randi Rost, describes the OpenGL Shading Language — the first shading language designed as a cross-platform open standard by a group of graphics hardware and software vendors.

Rost is a veteran of the computer graphics industry. He started programming graphics on an Apple II in the late 1970s and was working on programmable graphics hardware as early as 1983, when programmable graphics hardware meant little more than a framebuffer with a microcode interface. Graphics hardware has advanced dramatically since then and continues to advance rapidly today. Most modern 3D hardware supports some type of programmable interface, and should support the Architecture Review Board (ARB)-approved OpenGL Shading Language in the near future.

The first chapter is a whirlwind overview of OpenGL. You might be tempted to skip this chapter. But before you do, consider that Rost is one of only a few who have contributed to every major revision of OpenGL. If you're a beginner or intermediate OpenGL programmer, you'll certainly learn something in this brief review. Chapters 2 through 7 introduce you to the OpenGL Shading Language, covering topics such as language semantics and OpenGL entry points for specifying shaders.

Paul is a software engineer at SimAuthor Inc. in Boulder, Colorado, specializing in 3D computer graphics and the OpenGL Standard. He can be contacted at [email protected].
OpenGL Shading Language
Randi Rost
Addison-Wesley, 2004
608 pp., $59.99
ISBN 0321197895

Chapter 8 discusses shader development and performance issues. As you might expect from a book on a shading language, much of the performance discussion concerns shaving cycles from vertex- and pixel-shaders. The information is practical and not obvious even to intermediate programmers; for example, using min() or max() instead of clamp() when you know the variable will only exceed one end of a range. However, I found little discussion on how developers might determine which stage of the rendering pipeline is the performance bottleneck. Since this subject is considered black magic by many young and enthusiastic graphics developers, Rost could have added value to his book with a short section on this subject.
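For readers who haven't met that particular trick, here it is sketched in C rather than GLSL (the GLSL built-ins are likewise named min, max, and clamp); the function names are mine, and the point is simply that a single-ended bound costs one comparison instead of two.

#include <math.h>

/* Full clamp to [0,1]: two comparisons per value. */
float clamp01(float x)    { return fminf(fmaxf(x, 0.0f), 1.0f); }

/* If x is known never to go below zero, one comparison is enough. */
float cap_at_one(float x) { return fminf(x, 1.0f); }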
Chapter 9 provides shader listings for implementing core OpenGL functionality. The OpenGL Specification is the ultimate definition of OpenGL internal functionality, but the spec is mostly text and formulas, with only a few code listings. These well-written, concise, and efficient examples of shader code are both illuminating and instructive.

Chapters 10 through 16 provide computer graphics developers with real-time working OpenGL Shading Language source code for implementing several major computer-graphics algorithms and techniques. Topic areas include lighting, Phong shading, texture mapping, bump mapping, multitexturing, procedural texture mapping, lattice shaders, noise, turbulence, shadows, animation, particle systems, antialiasing, hatching and other nonphotorealistic techniques, vertex and image blending, image convolution, and more. These examples demonstrate the range of OpenGL Shading Language applications, and give you a basis for writing new shaders. Rost's explanations of the algorithms are easy to read and comprehend, and demonstrate the depth and breadth of knowledge he has accumulated during his 25-year career in graphics.

Conspicuously missing is any mention of global-illumination algorithms such as ray tracing and radiosity. Such scene-based algorithms present obvious challenges to vertex- and pixel-based shading languages. Rost confessed to having a rough ray-tracing demo that was not ready for publication when this book went to press. He expressed optimism about the OpenGL Shading Language's ability to accelerate programs of this type: "In future revisions of hardware, we'll be able to implement more interesting algorithms [than currently appear in this book]."

While reading OpenGL Shading Language, I often found myself noting similarities and differences between the OpenGL Shading Language and interfaces to other programmable graphics hardware I've used. The book's final chapter covers this topic by comparing and contrasting the OpenGL Shading Language to current commercial shading languages such as RenderMan, ISL, HLSL, and Cg. Two appendices serve as useful reference material. Appendix A covers OpenGL Shading Language grammar, and Appendix B documents OpenGL entry points for creating and managing shaders.

In general, the computer industry often provides two solutions — one proprietary and the other an open standard. As the only open standard shading language available that is designed for modern graphics hardware, the OpenGL Shading Language is certain to be around for several years to come. OpenGL Shading Language stands on its own as both a programming guide and reference manual for this significant new industry standard. However, this book goes further by providing real-time examples of classic computer graphics techniques. OpenGL Shading Language is a must-have algorithm book that should be on every computer graphics developer's bookshelf.

DDJ
OF INTEREST

Visual Numerics is offering the IMSL Thread Safe Fortran Numerical Library for application development in high-performance computing organizations. Complete thread safety lets you call the same routine multiple times in the program and have multiple instances of the routine running on multiple threads, improving performance and simplifying programming. The IMSL Thread Safe Fortran Numerical Library is currently available on Sun Solaris (32 and 64 bit) and IBM AIX (64 bit). Visual Numerics Inc. 12657 Alcosta Boulevard, Suite 450 San Ramon, CA 94583 925-415-8300 http://www.vni.com/

Innovartis has launched DB Ghost 3.0, a database change-management system for SQL Server users. The DB Ghost process involves scripting out your entire production database's schema and static data into individual scripts, using the DB Ghost Scripter, and adding these into your source-management system. From that point on, to make a change, you check out the relevant script, make the amendments, execute it against your development database, do unit testing/bug fixing, and check it back in when you're finished. DB Ghost then verifies that all the scripts compile with no dependency errors, and makes the necessary changes to the target database to ensure that it matches the scripts. Innovartis Ltd 1 Myrtle Road Hampton, London TW12 1QE U.K. http://www.innovartis.co.uk/ +44 207 917 6264

MarshallSoft's FTP Client Engine for C/C++ (FCE4C) is a library of functions providing direct control of the FTP protocol. FCE4C contains both WIN16 and WIN32 DLLs and can be used with any application capable of calling the Windows API. The newly released FCE4C 2.4 is available with source code and comes with example programs, including Microsoft Foundation Class (MFC) and Borland C++ Builder (BCB) examples. MarshallSoft Computing Inc. POB 4543 Huntsville, AL 35815 256-881-4630 http://www.marshallsoft.com/
The new LynuxWorks Eclipse-based IDE is a Linux and Solaris-based development environment, powered by the Eclipse platform, that gives LynuxWorks' BlueCat Linux developers control over creating, editing, compiling, managing, and debugging C/C++ and Java embedded and real-time applications. You also benefit from a target connection wizard to simplify target setup, an application wizard to help start development, and a cross-process viewer, enabling you to view all the processes and threads on a specific target. LynuxWorks 855 Embedded Way San Jose, CA 95138-1018 408-979-3900 http://www.lynuxworks.com/

SmartKernel from Aonix is a memory- and time-partitioned kernel designed to provide safety and security protection. Aonix is targeting language-specific configurations of SmartKernel so that you can create systems where multiple applications in multiple languages can safely run on a single board. Newly released is SmartKernel-Ada95/Embedded, offering support for the full Ada95 language on the PowerPC. As the first implementation of a language-specific implementation using SmartKernel, this release runs as a single SmartKernel partition. Aonix North America Inc. 5040 Shoreham Place, Suite 100 San Diego, CA 92122 858-457-2700 http://www.aonix.com/

N8 Systems has debuted N8 Archetype, which lets business analysts describe requirements in English text and see them represented as UML use case and activity diagrams. Provided as a subscription-based online service, Archetype offers a dual-view interface that uses Microsoft's Word and Visio. Text in Word describing business processes is transformed into Visio diagrams of an existing or desired system, filtering out ambiguities in the process. N8 Systems 918 Parker Street, Suite A-11 Berkeley, CA 94710 510-841-8833 http://www.n8systems.com/
SensAble Technologies has announced its 3D Touch SDK and Haptic Device API (HDAPI). This SDK supports the SensAble PHANTOM devices and enables the integration of haptics with existing third-party applications. The HDAPI enables haptic programmers to render forces directly, offers control over configuring the runtime behavior of the drivers, and provides utility features and debugging aids. SensAble Technologies Inc. 15 Constitution Way Woburn, MA 01801 781-937-8315 http://www.sensable.com/

Xoetrope has released XUI 1.0.3, an open-source Java and XML framework for creating rich client applications. The framework supports rich data binding features, and complete applications can also be built with XUI using XML with application logic written in Java (no JavaScript). The framework includes a graphical IDE. The XUI framework is available at http://xui.sourceforge.net/, and licensed under the Artistic License. Xoetrope 13 Landscape Crescent Churchtown, Dublin 12 Ireland +353 1 2967702 http://www.xoetrope.com/

dtrt.NavBarWin is a RAD control for .NET developers that lets users navigate in a data collection, either by clicking on buttons, using shortcut keys, entering navigation commands in the display window or by selecting commands from the context menu. For additional purposes, developer-defined buttons can be activated. The control is built to work with datasets, views, collections, and every object that implements the IList interface, and can be used with all the VS.NET compatible languages. dtrt AG Itenhardstrasse 48 P.O. Box 750 5620 Bremgarten Switzerland +41 76 467 2936 http://www.dtrt.com/

DDJ

Dr. Dobb's Software Tools Newsletter

What's the fastest way of keeping up with new developer products and version updates? Dr. Dobb's Software Tools e-mail newsletter, delivered once a month to your mailbox. This unique newsletter keeps you up-to-date on the latest in SDKs, libraries, components, compilers, and the like. To sign up now for this free service, go to http://www.ddj.com/maillists/.
SWAINE’S FLAMES
Razing the Bar
It had been over a year since I'd stepped inside Foo Bar, the late-night hangout of computer-industry executives, programmers, and journalists out on the edge of Silicon Valley where I occasionally work as relief bartender to listen in on industry gossip. I'd been too busy to moonlight for a while and wasn't quite ready to pick up a bar rag just yet. But I thought it would be fun to see how it felt to be on the other side of the bar. So I dropped in, cleverly disguised as a customer.

I was totally unprepared for what I encountered. My first thought was, "My gosh, somebody cleaned the place." It was not a welcome thought. The room was, in fact, a lot cleaner but it went beyond that, I realized, taking stock. The floor was no longer dark, creaky planking with crunchy peanut shells. It was quiet beige linoleum. The book-swap bin that Bernie the bartender had installed by the front door, with the hand-lettered sign, "Barnegat Box" — gone. The lighting — I couldn't remember what the lighting had been before, but I knew I couldn't see to the back of the room from the front door, and now I could. The booths in the back had been replaced by round tables with white tablecloths. There were initials carved in those booths that dated back to the days of Shockley Semiconductor. And where were the regulars? These new people sipping flavored margaritas at the round tables had PR written all over them.

I grilled Bernie over a bowl of chili. Sorry. I mean, I ordered a bowl of chili and asked Bernie what had happened.

"Capitalism happened," said Bernie, who has some romantic ideas about the redistribution of wealth, which he applies after hours to supplement his bartending income, although I'm not supposed to talk about that. He put "Nighthawks at the Diner" in the CD player on the corner of the bar — Foo Bar has never had a jukebox and still didn't — and explained that the conglomerate that owned Foo Bar had bought a small trucking firm just before gas prices went through the roof and had picked up a lot of Martha Stewart Living stock just before she branched out into indictable activities and had bought a bunch of Clie handhelds just before Sony terminated the line and they were a little strapped for cash.

"Jeez, did they also lend their car keys to Darrell Issa?"

"They aren't brilliant businessmen, I guess, but they always let me run this place without interference. But now they've sold it."

"And the new owner did this? Who is the new owner?"

"Now there you raise an interesting question," Bernie answered, if you call that an answer. Long story short, he didn't know. Apparently, nobody knew. Bills got paid and paychecks got issued and the renovation got done, but it all seemed to happen through a string of ephemeral intermediaries, and the checks bore various familiar-sounding but actually unfamiliar corporate names.

Lacking any answers, we drifted on to other topics. "Think these self-destructing DVDs, like Flexplay brought out last year and this French company is flogging this year, are going to catch on?" I asked.

"Probably. It's a meme. You've got term limits for politicians, terminator seeds that won't reproduce. It's not a question of whether it's a good idea, it's a question of whether it has a good beat and people can dance to it."

"So you give it a ten?"

Just then a Take-Charge Guy rose from an animated discussion at one of the round tables and strode purposefully toward us. "Say," he said, "you were playing that morose tripe last night, too.
I brought you something just a little more upbeat.” He pulled a CD out of his sportscoat pocket. “Please put this on.” Bernie silently accepted the proffered CD and replaced Tom Waits with Barry Manilow. The customer is always right. “This is just unacceptable, Bernie,” I snarled. “We’ve got to do something.” He got that look in his eyes that he gets when he’s about to redistribute some wealth. “I think you’re right. Let me see what I can turn up.” …to be continued…
Michael Swaine editor-at-large [email protected]
Dr. Dobb’s Journal, August 2004