To Heather, Wendy, Denny, Leilani, Jesse and Anne, whose love and friendship give me the endless source of energy and happiness. Hung Q. Nguyen
To Victoria, for all the advice, help, support, and love she has given me. Bob Johnson
To Ron, from whom I have stolen much time to make this book happen. Thank you for your love and support. Michael Hackett
Contents

Preface
Foreword
Acknowledgments
About the Authors

Part One  Introduction

Chapter 1  Welcome to Web Testing
  Why Read This Chapter?
  Introduction
  The Evolution of Software Testing
  The Gray-Box Testing Approach
  Real-World Software Testing
  Themes of This Book
  What's New in the Second Edition
  New Contents and Significant Updates
  What Remains from the First Edition

Chapter 2  Web Testing versus Traditional Testing
  Why Read This Chapter?
  Introduction
  The Application Model
  Hardware and Software Differences
  The Differences between Web and Traditional Client-Server Systems
  Client-Side Applications
  Event Handling
  Application Instance and Windows Handling
  UI Controls
  Web Systems
  Hardware Mix
  Software Mix
  Server-Based Applications
  Distributed Server Configurations
  The Network
  Bug Inheritance
  Back-End Data Accessing
  Thin-Client versus Thick-Client Processing
  Interoperability Issues
  Testing Considerations
  Bibliography

Part Two  Methodology and Technology

Chapter 3  Software Testing Basics
  Why Read This Chapter?
  Introduction
  Basic Planning and Documentation
  Common Terminology and Concepts
  Test Conditions
  Static Operating Environments
  Dynamic Operating Environments
  Test Types
  Acceptance Testing
  Feature-Level Testing
  Phases of Development
  Test-Case Development
  Equivalence Class Partitioning and Boundary Condition Analysis
  State Transition
  Use Cases
  Example Test Cases from Use Cases
  Test Cases Built from Use Cases
  Templates for Use-Case Diagram, Text, and Test Case
  Condition Combination
  The Combinatorial Method
  Bibliography

Chapter 4  Networking Basics
  Why Read This Chapter?
  Introduction
  The Basics
  The Networks
  The Internet
  Local Area Networks (LANs)
  Wide Area Networks (WANs)
  Connecting Networks
  Connectivity Services
  Direct Connection
  Other Network Connectivity Devices
  TCP/IP Protocols
  The TCP/IP Architecture
  Testing Scenarios
  Connection Type Testing
  Connectivity Device Testing
  Other Useful Information
  IP Addresses and DNS
  IP Address
  Network Classes
  Domain Name System (DNS)
  Subnet
  Subnet Masks
  Custom Subnets
  A Testing Example
  Host Name and IP Resolution Tests
  Testing Considerations
  Bibliography

Chapter 5  Web Application Components
  Why Read This Chapter?
  Introduction
  Overview
  Distributed Application Architecture
  Traditional Client-Server Systems
  Thin- versus Thick-Client Systems
  Web-Based Client-Server Systems
  Software Components
  Operating Systems
  Application Service Components
  Third-Party Components
  Integrated Application Components
  Dynamic Link Library (DLL)
  Potential DLL-Related Errors
  Scripts
  Web Application Component Architecture
  Server-Side Components
  Core Application Service Components
  Markup Language Pages
  XML with SOAP
  Web-to-Database Connectivity
  Other Application Service Components
  Client-Side Components
  Web Browsers
  Add-on/Plug-in Components
  Testing Discussion
  Test-Case Design Analysis
  Test Partitioning
  Testing Considerations
  DLL Testing Issues
  Script Testing Issues
  Characteristics of a Script
  Use of Scripts in Web Applications
  Testing Scripts in Web Applications
  Coding-Related Problems
  Script Configuration Testing
  Bibliography

Chapter 6  Mobile Web Application Platform
  Why Read This Chapter?
  Introduction
  What Is a Mobile Web Application?
  Various Types of Mobile Web Client
  Palm-Sized PDA Devices
  Data Synchronizing
  Web Connectivity
  Various Types of Palm-Sized PDA Devices
  Handheld PCs
  WAP-Based Phones
  i-Mode Devices
  Smart Phones or Mobile Phone/PDA Combos
  Mobile Web Application Platform Test Planning Issues
  Microbrowsers
  Web Clipping Application: How Does It Work?
  Handheld Device Hardware Restrictions
  Software-Related Issues
  Wireless Network Issues
  Wireless Network Standards
  Wireless Modem
  Wireless LAN and Bluetooth
  Other Software Development Platforms and Support Infrastructures
  The Device Technology Converging Game: Who Is the Winner?
  Bibliography and Additional Resources
  Bibliography
  Additional Resources

Chapter 7  Test Planning Fundamentals
  Why Read This Chapter?
  Introduction
  Test Plans
  LogiGear One-Page Test Plan
  Developing a One-Page Test Plan
  Step 1: Test Task Definition
  Step 2: Task Completion Time
  Step 3: Placing the Test Task into Context
  Step 4: Table Completion
  Step 5: Resource Estimation
  Using the LogiGear One-Page Test Plan
  Testing Considerations
  Issue Reports
  Weekly Status Reports
  Automated Testing
  Milestone Criteria and Milestone Test
  Bibliography

Chapter 8  Sample Application
  Why Read This Chapter?
  Introduction
  Application Description
  Technical Overview
  System Requirements
  Functionality of the Sample Application
  Installing the Sample Application
  Getting Started
  Division Databases
  Importing Report Data
  System Setup
  Project Setup
  E-Mail Notification
  Submitting Defect Reports
  Generating Metrics
  Documentation
  Bibliography

Chapter 9  Sample Test Plan
  Why Read This Chapter?
  Introduction
  Gathering Information
  Step 1: Testing-Task Definitions for the Sample Application
  Step 2: Task Completion Time
  Step 3: Placing Test Tasks into the Project Plan
  Step 4: Calculate Hours and Resource Estimates
  Sample One-Page Test Plan
  Bibliography

Part Three  Testing Practice

Chapter 10  User Interface Tests
  Why Read This Chapter?
  Introduction
  User Interface Design Testing
  Profiling the Target User
  Computer Experience
  Web Experience
  Domain Knowledge
  Application-Specific Experience
  Considering the Design
  Design Approach
  User Interaction (Data Input)
  Data Presentation (Data Output)
  User Interface Implementation Testing
  Miscellaneous User Interface Elements
  Display Compatibility Matrix
  Usability and Accessibility Testing
  Accessibility Testing

Chapter 11  Functional Tests
  Why Read This Chapter?
  Introduction
  An Example of Cataloging Features in Preparation for Functional Tests
  Testing the Sample Application
  Testing Methods
  Functional Acceptance Simple Tests
  Task-Oriented Functional Tests
  Forced-Error Tests
  Boundary Condition Tests and Equivalent Class Analysis
  Exploratory Testing
  Software Attacks
  Which Method Is It?
  Bibliography

Chapter 12  Server-Side Testing
  Why Read This Chapter?
  Introduction
  Common Server-Side Testing Issues
  Connectivity Issues
  Time-Out Issues
  Maintaining State
  Server-Side Testing Tips
  Using Log Files
  Using Monitoring Tools
  Creating Test Interfaces or Test Drivers
  The Testing Environment
  Working with Live Systems
  Resetting the Server
  Using Scripts in Server-Side Testing
  Bibliography
  Additional Resources
  Testing Tools for Run-Time Testing

Chapter 13  Using Scripts to Test
  Why Read This Chapter?
  Introduction
  Batch or Shell Commands
  Batch Files and Shell Scripts
  Scripting Languages
  Why Not Just Use a Compiled Program Language?
  What Should You Script?
  Application of Scripting to Testing Tasks
  System Administration: Automating Tasks
  Discovering Information about the System
  Testing the Server Directly: Making Server-Side Requests
  Working with the Application Independent of the UI
  Examining Data: Log Files and Reports
  Using Scripts to Understand Test Results
  Using Scripts to Improve Productivity
  A Script to Test Many Files
  A Set of Scripts That Run Many Times
  Executing Tests That Cannot Be Run Manually
  Scripting Project Good Practice
  Scripting Good Practice
  Resource Lists
  General Resources for Learning More about Scripting
  Windows Script Host (WSH)
  Batch and Shell
  Perl
  Tcl
  AWK
  Learn SQL
  Where to Find Tools and Download Scripts
  Bibliography and Useful Reading

Chapter 14  Database Tests
  Why Read This Chapter?
  Introduction
  Relational Database Servers
  Structured Query Language
  Database Producers and Standards
  Database Extensions
  Example of SQL
  Client/SQL Interfacing
  Microsoft Approach to CLI
  Java Approach to CLI
  Testing Methods
  Common Types of Errors to Look For
  Database Stored Procedures and Triggers
  White-Box Methods
  Code Walk-through
  Redundancy Coding Error Example
  Inefficiency Coding Error Example
  Executing the SQL Statements One at a Time
  Executing the Stored Procedures One at a Time
  Testing Triggers
  External Interfacing
  Black-Box Methods
  Designing Test Cases
  Testing for Transaction Logic
  Testing for Concurrency Issues
  Preparation for Database Testing
  Setup/Installation Issues
  Testing with a Clean Database
  Database Testing Considerations
  Bibliography and Additional Resources
  Bibliography
  Additional Resources

Chapter 15  Help Tests
  Why Read This Chapter?
  Introduction
  Help System Analysis
  Types of Help Systems
  Application Help Systems
  Reference Help Systems
  Tutorial Help Systems
  Sales and Marketing Help Systems
  Evaluating the Target User
  Evaluating the Design Approach
  Evaluating the Technologies
  Standard HTML (W3 Standard)
  Java Applets
  Netscape NetHelp
  ActiveX Controls
  Help Elements
  Approaching Help Testing
  Two-Tiered Testing
  Stand-alone Testing
  Interaction between the Application and the Help System
  Types of Help Errors
  Testing Considerations
  Bibliography

Chapter 16  Installation Tests
  Why Read This Chapter?
  Introduction
  The Roles of Installation/Uninstallation Programs
  Installer
  Uninstaller
  Common Features and Options
  User Setup Options
  Installation Sources and Destinations
  Server Distribution Configurations
  Server-Side Installation Example
  Media Types
  Branching Options
  Common Server-Side-Specific Installation Issues
  Installer/Uninstaller Testing Utilities
  Comparison-Based Testing Tools
  InControl4 and InControl5
  Norton Utilities' Registry Tracker and File Compare
  Testing Considerations
  Bibliography and Additional Resources
  Bibliography
  Additional Resources

Chapter 17  Configuration and Compatibility Tests
  Why Read This Chapter?
  Introduction
  The Test Cases
  Approaching Configuration and Compatibility Testing
  Considering Target Users
  When to Run Compatibility and Configuration Testing
  Potential Outsourcing
  Comparing Configuration Testing with Compatibility Testing
  Configuration/Compatibility Testing Issues
  COTS Products versus Hosted Systems
  Distributed Server Configurations

Chapter 18  Web Security Testing
  Why Read This Chapter?
  Introduction
  What Is Computer Security?
  Security Goals
  From Which Threats Are We Protecting Ourselves?
  Common Sources of Security Threats
  What Is the Potential Damage?
  Anatomy of an Attack
  Information Gathering
  Network Scanning
  Attacking
  Attacking Intents
  Security Solution Basics
  Strategies, People, and Processes
  Education
  Corporate Security Policies
  Corporate Responses
  Authentication and Authorization
  Passwords
  Authentication between Software Applications or Components
  Cryptography
  Other Web Security Technologies
  Perimeter-Based Security: Firewalls, DMZs, and Intrusion Detection Systems
  Firewalls
  Setting Up a DMZ
  Intrusion Detection Systems (IDS)
  Common Vulnerabilities and Attacks
  Software Bugs, Poor Design, and Programming Practice
  Buffer Overflows
  Malicious Input Data
  Command-Line (Shell) Execution
  Backdoors
  JavaScript
  CGI Programs
  Java
  ActiveX
  Cookies
  Spoofing
  Malicious Programs
  Virus and Worm
  Trojan Horses
  Misuse Access Privilege Attacks
  Password Cracking
  Denial-of-Service Attacks
  Physical Attacks
  Exploiting the Trust Computational Base
  Information Leaks
  Social Engineering
  Keystroke Capturing
  Garbage Rummaging
  Packet Sniffing
  Scanning and Probing
  Network Mapping
  Network Attacks
  Testing Goals and Responsibilities
  Functionality Side Effect: An Error-Handling Bug Example
  Testing for Security
  Testing the Requirements and Design
  Requirements Are Key
  Trusted Computational Base (TCB)
  Access Control
  Which Resources Need to Be Protected?
  Client Privacy Issues: What Information Needs to Be Private?
  Testing the Application Code
  Backdoors
  Exception Handling and Failure Notification
  ID and Password Testing
  Testing for Information Leaks
  Random Numbers versus Unique Numbers
  Testing the Use of GET and POST
  Parameter-Tampering Attacks
  SQL Injection Attacks
  Cookie Attacks
  Testing for Buffer Overflows
  Testing for Bad Data
  Reliance on Client-Side Scripting
  When Input Becomes Output
  Testing Third-Party Code
  Known Vulnerabilities
  Race Conditions
  Testing the Deployment
  Installation Defaults
  Default Passwords
  Internationalization
  Program Forensics
  Working with Customer Support Folks
  Penetration Testing
  Testing with User Protection via Browser Settings
  Testing with Firewalls
  The Challenges Testers Face
  Other Testing Considerations
  Bibliography and Additional Resources
  Bibliography
  Additional Resources
  Useful Net Resources
  Tools

Chapter 19  Performance Testing
  Why Read This Chapter?
  Introduction
  Performance Testing Concepts
  Determining Acceptable Response Time or Acceptable User Experience
  Response Time Definition
  Performance and Load Stress Testing Definitions
  Searching for Answers
  A Simple Example
  Performance Testing Key Factors
  Workload
  System Environment and Available Resources
  Response Time
  Key Factors Affecting Response Time or Performance
  Three Phases of Performance Testing
  Setting Goals and Expectations and Defining Deliverables
  Gathering Requirements
  What Are You Up Against?
  What If Written Requirements Don't Exist?
  Defining the Workload
  Sizing the Workload
  Server-Based Profile
  User-Based Profile
  Problems Concerning Workloads
  Selecting Performance Metrics
  Throughput Calculation Example
  Which Tests to Run and When to Start
  Tool Options and Generating Loads
  Tool Options
  Analyzing and Reporting Collected Data
  Generating Loads
  Writing the Test Plan
  Identifying Baseline Configuration and Performance Requirements
  Determining the Workload
  Determining When to Begin Testing
  Determine Whether the Testing Process Will Be Hardware-Intensive or Software-Intensive
  Developing Test Cases
  Testing Phase
  Generating Test Data
  Setting Up the Test Bed
  Setting Up the Test Suite Parameters
  Performance Test Run Example
  Analysis Phase
  Other Testing Considerations
  Bibliography

Chapter 20  Testing Mobile Web Applications
  Why Read This Chapter?
  Introduction
  Testing Mobile versus Desktop Web Applications
  Various Types of Tests
  Add-on Installation Tests
  Data Synchronization-Related Tests
  UI Implementation and Limited Usability Tests
  UI Guideline References
  Browser-Specific Tests
  Platform-Specific Tests
  Platform or Logo Compliance Tests
  Configuration and Compatibility Tests
  Connectivity Tests
  Devices with Peripheral Network Connections
  Latency
  Transmission Errors
  Transitions from Coverage to No-Coverage Areas
  Transitions between Data and Voice
  Data or Message Race Condition
  Performance Tests
  Security Tests
  Testing Web Applications Using an Emulation Environment
  Testing Web Applications Using the Physical Environment
  Device and Browser Emulators
  Palm Computing
  OpenWave
  Nokia
  YoSpace
  Microsoft
  Web-Based Mobile Phone Emulators and WML Validators
  Desktop WAP Browsers
  Other Testing Considerations
  Bibliography and Additional Resources
  Bibliography
  Additional Resources

Chapter 21  Web Testing Tools
  Why Read This Chapter?
  Introduction
  Types of Tools
  Rule-Based Analyzers
  Sample List of Link Checkers and HTML Validators
  Sample List of Rule-Based Analyzers for C/C++, Java, Visual Basic, and Other Programming and Scripting Languages
  Load/Performance Testing Tools
  Web Load and Performance Testing Tools
  GUI Capture (Recording/Scripting) and Playback Tools
  Sample List of Automated GUI Functional and Regression Testing Tools
  Runtime Error Detectors
  Sample List of Runtime Error-Detection Tools
  Sample List of Web Security Testing Tools
  Java-Specific Testing Tools
  Other Types of Useful Tools
  Database Testing Tools
  Defect Management Tool Vendors
  QACity.Com Comprehensive List of Defect Tracking Tool Vendors
  Additional Resources
  On the Internet
  Development and Testing Tool Mail-Order Catalogs

Chapter 22  Finding Additional Information
  Why Read This Chapter?
  Introduction
  Textbooks
  Web Resources
  Useful Links
  Useful Magazines and Newsletters
  Miscellaneous Papers on the Web from Carnegie Mellon University's Software Engineering Institute

Appendix F  Web Test-Case Design Guideline: Input Boundary and Validation Matrix I

Appendix G  Display Compatibility Test Matrix

Appendix H  Browser OS Configuration Matrix

Index
Preface
Testing Applications on the Web introduces the essential technologies, testing concepts, and techniques that are associated with browser-based applications. It offers advice pertaining to the testing of business-to-business applications, business-to-end-user applications, Web portals, and other Internet-based applications. The primary audience is software testers, software quality engineers, quality assurance staff, test managers, project managers, IT managers, business and system analysts, and anyone who has the responsibility of planning and managing Web-application test projects.

Testing Applications on the Web begins with an introduction to the client-server and Web system architectures. It offers an in-depth exploration of Web application technologies such as network protocols, component-based architectures, and multiple server types from the testing perspective. It then covers testing practices in the context of various test types, from user interface tests to performance, load, stress, and security tests.

Chapters 1 and 2 present an overview of Web testing. Chapters 3 through 6 cover methodology and technology basics, including a review of software testing basics, a discussion on networking, an introduction to component-based testing, and an overview of the mobile device platform. Chapters 7 through 9 discuss test planning fundamentals, a sample application to be used as an application under test (AUT) throughout the book, and a sample test plan. Chapters 10 through 20 discuss test types that can be applied to Web testing. Finally, Chapters 21 and 22 offer a survey of Web testing tools and suggest where to go for additional information.

Testing Applications on the Web answers testing questions such as, “How do networking hardware and software affect applications under test?” “What are Web application components, and how do they affect my testing strategies?”
“What is the role of a back-end database, and how do I test for database-related errors?” “How do I test server-side software?” “What are performance, stress, and load tests, and how do I plan for and execute them?” “What do I need to know about security testing, and what are my testing responsibilities?” “What do I need to consider in testing mobile Web applications?”

With a combination of general testing methodologies and the information contained in this book, you will have the foundation required to achieve these testing goals—maximizing productivity and minimizing quality risks in a Web application environment.

Testing Applications on the Web assumes that you already have a basic understanding of software testing methodologies, including test planning, test-case design, and bug report writing. Web applications are complex systems that involve numerous components: servers, browsers, third-party software and hardware, protocols, connectivity, and much more. This book enables you to apply your existing testing skills to the testing of Web applications.
NOTE: This book is not an introduction to software testing. If you are looking for fundamental software testing practices, you will be better served by reading Testing Computer Software, Second Edition, by Cem Kaner, Jack Falk, and Hung Q. Nguyen (Wiley, 1999). For additional information on Web testing and other testing techniques and resources, visit www.QAcity.com.
We have enjoyed writing this book and teaching the Web application testing techniques that we use every day to test Web-based systems. We hope that you will find here the information you need to plan for and execute a successful testing strategy that enables you to deliver high-quality applications in an increasingly distributed-computing, market-driven, and time-constrained environment in this era of new technology.
Foreword
Writing about Web testing is challenging because the field involves the interdependence of so many different technologies and systems. It’s not enough to write about the client. Certainly, the client software is the part of the application that is the most visible to the customer, and it’s the easiest to write about (authors can just repackage the same old stuff published about applications in general). Hung, Michael, and Bob do provide client-side guidance, but their goal is to provide information that is specific to Web applications. (For more generic material, you can read Testing Computer Software, Second Edition, Wiley, 1999.)

But client-side software is just the tip of the iceberg. The application displays itself to the end user as the client, but it does most of its work in conjunction with other software on the server side, much of it written and maintained by third parties. For example, the application probably stores and retrieves data via third-party databases. If it sells products or services, it probably clears customer orders with the customer’s credit card company. It might also check its distributor for available inventory and its shippers for the cost of shipping the software to the customer. The Web application communicates with these third parties through network connections written by third parties. Even the user interface is only partially under the application developer’s control—the customer supplies the presentation layer: the browser, the music and video player, and perhaps various other multimedia plug-ins.

The Web application runs on a broader collection of hardware and software platforms than any other type of application in history. Attributes of these platforms can change at any time, entirely outside of the knowledge or control of the Web application developer.
In Testing Applications on the Web, Nguyen, Hackett, and Johnson take this complexity seriously. In their view, a competent Web application tester must learn the technical details of the systems with which the application under test interacts. To facilitate this, they survey many of those systems, explaining how applications interact with them and providing testing tips.

As a by-product of helping testers appreciate the complexity of the Web testing problem, the first edition of Testing Applications on the Web became the first book on gray-box testing. In so-called black-box testing, we treat the software under test as a black box. We specify the inputs, we look at the outputs, but we can’t see inside the box to see how it works. The black-box tester operates at the customer’s level, basing tests on knowledge of how the system should work. In contrast, the white-box tester knows the internals of the software and designs tests with direct reference to the program’s source code. The gray-box tester doesn’t have access to the source code, but he or she knows much more about the underlying architecture and the nature of the interfaces between the application under test and the other software and the operating systems.

The second edition continues the gray-box analysis by deepening the discussions in the first edition. It also adds several new chapters to address business-critical testing issues, from server-side, performance-, and application-level security testing to the latest mobile Web application testing.

A final strength of the book is the power of the real-world example. Hung Quoc Nguyen is the president of the company that published TRACKGEAR, a Web-based bug-tracking system, which enables the authors to give us the inside story of its development and testing. This combination of a thorough and original presentation of a style of analysis, mixed with detailed insider knowledge, is a real treat to read. It teaches us about thinking through the issues involved when the software under test interacts in complex ways with many other programs, and it gives the book a value that will last well beyond the specifics of the technologies described therein.

Cem Kaner, J.D., Ph.D.
Professor of Computer Sciences
Florida Institute of Technology
Acknowledgments
While it is our names that appear on the cover, over the years many people have helped with the development of this book. We want to particularly thank Brian Lawrence for his dedication in providing thorough reviews and critical feedback. We all thank Cem Kaner for his guidance, friendship, and generosity, and for being there when we needed him. We thank, too, Jesse Watkins-Gibbs for his work on examples and sample code, as well as for his technical expertise and his commitment to getting our book done.

We would also like to thank our professional friends who took time out from their demanding jobs and lives to review and comment on the book: Yannick Bertolus, George Hamblin, Elisabeth Hendrickson, Nematolah Kashanian, Pat McGee, Alberto Savoia, and Garrin Wong. We would like to thank our copyeditor, Janice Borzendowski.

We also want to thank the following people for their contributions (listed in alphabetical order): James L. Carr, William Coleman, Norm Hardy, Pam Hardy, Thomas Heinz, Chris Hibbert, Heather Ho, Brian Jones, Denny Nguyen, Kevin Nguyen, Wendy Nguyen, Steve Schuster, Kurt Thams, Anne Tran, Dean Tribble, and Joe Vallejo.

Finally, we would like to thank our colleagues, students, and staff at LogiGear Corporation for their discussions and evaluations of the Web testing training material, which made its way into this book. And thanks to our agent, Claudette Moore of Moore Literary Agency.

Certainly, any remaining errors in the book are ours.
About the Authors
Hung Q. Nguyen is Founder, President, and CEO of LogiGear Corporation. Nguyen has held leadership roles in business management, product development, business development, engineering, quality assurance, software testing, and information technology. Hung is an international speaker and a regular contributor to industry publications. He is the original architect of TRACKGEAR, a Web-based defect management system. Hung also teaches software testing for the University of California at Berkeley and Santa Cruz Extension, and LogiGear University.

Hung is the author of Testing Applications on the Web, First Edition (Wiley, 2000); and, with Cem Kaner and Jack Falk, he wrote the best-selling book Testing Computer Software (ITP/Wiley, 1993/1999). He holds a Bachelor of Science in Quality Assurance from Cogswell Polytechnical College, and is an ASQ-Certified Quality Engineer and a member of the Advisory Council for the Department of Applied Computing and Information Systems at UC Berkeley Extension. You can reach Hung at [email protected]; or, to obtain more information about LogiGear Corporation and Hung’s work, visit www.logigear.com.

Bob Johnson has been a software developer, tester, and manager of both development and testing organizations. With over 20 years of experience in software engineering, Bob has acquired key strengths in building applications on a variety of platforms. Bob’s career in software development ranges from Web programming to consulting on legal aspects of e-commerce to the requirement and review process. Whether working in test automation, Web security, or back-end server testing, Bob is at the forefront of emerging technologies.
In addition to participating in the Los Altos Workshops on Software Testing (LAWST), Bob has written articles for IEEE Software, Journal of Electronic Commerce, and Software Testing and Quality Engineering. He can be reached at [email protected].

Michael Hackett is Vice President and a founding partner of LogiGear Corporation. He has over a decade of experience in software engineering and the testing of shrink-wrap and Internet-based applications. Michael has helped well-known companies release applications ranging from business productivity to educational multimedia titles, in English as well as a multitude of other languages. Michael has taught software testing for the University of California at Berkeley Extension, the Software Productivity Center in Vancouver, the Hong Kong Productivity Centre, and LogiGear University. Michael holds a Bachelor of Science in Engineering from Carnegie Mellon University. He can be reached at [email protected].
PART One
Introduction
CHAPTER 1
Welcome to Web Testing
Why Read This Chapter?

The goal of this book is to help you effectively plan for and conduct the testing of Web-based applications developed both for fixed clients (systems in fixed locations such as desktop computers) and for mobile clients such as mobile phones, PDAs (Personal Digital Assistants), and portable computers. This book will be more helpful to you if you understand the philosophy behind its design.

Software testing practices have been improving steadily over the past few decades. Yet, as testers, we still face many of the same challenges that we have faced for years. We are challenged by rapidly evolving technologies and the need to improve testing techniques. We are also challenged by the lack of research on how to test for and analyze software errors based on their behavior, as opposed to at the source code level. We are challenged by the lack of technical information and training programs geared toward serving the growing population of the not-yet-well-defined software testing profession. Finally, we are challenged by limited executive management support, a result of management’s underestimation of and inattention to the cost of quality.
Yet, in today’s world of Internet time, the systems under test are getting more complex by the day, and resources and testing time are in short supply. The quicker we can get the information that we need, the more productive and more successful we will be at doing our job. The goal of this book is to help you do your job effectively.

TOPICS COVERED IN THIS CHAPTER
◆ Introduction
◆ The Evolution of Software Testing
◆ The Gray-Box Testing Approach
◆ Real-World Software Testing
◆ Themes of This Book
◆ What’s New in the Second Edition
Introduction

This chapter offers a historical perspective on the changing objectives of software testing. It touches on the gray-box testing approach and suggests the importance of balancing knowledge of product design, from both the designer’s and the user’s perspectives, with system-specific technical knowledge. It also explores the value of problem analysis in determining what to test, when to test, and where to test. Finally, this chapter discusses the assumptions this book makes about the reader.
The Evolution of Software Testing

As the complexities of software development have evolved over the years, the demands placed on software engineering, information technology (IT), and software quality professionals have grown and taken on greater relevance. We are expected to check whether the software performs in accordance with its intended design and to uncover potential problems that might not have been anticipated in the design. We are expected to develop and execute more tests, faster, and more often. Test groups are expected to offer continuous assessment of the current state of the projects under development. At any given moment, we must be prepared to report explicit details of testing coverage and status, the health or stability of the current release, and all unresolved errors. Beyond that, testers are expected to act as user advocates. This often involves
anticipating usability problems early in the development process so those problems can be addressed in a timely manner.

In the early years, on mainframe systems, many users were connected to a central system. Bug fixing involved patching or updating the centrally stored program. This single fix would serve the needs of hundreds or thousands of individuals who used the system. As computing became more decentralized, minicomputers and microcomputers were run as stand-alone systems or on smaller networks. There were many independent computers or local area networks, and a patch to the code on one of these computers updated relatively fewer people. Mass-market software companies sometimes spent over a million dollars sending disks to registered customers just to fix a serious defect. Additionally, technical support costs skyrocketed.

As the market has broadened, more people use computers for more things and rely more heavily on them, so the consequences of software defects have risen every year. It is impossible to find all possible problems by testing, but as the cost of failure has gone up, it has become essential to do risk-based testing. In a risk-based approach, you ask questions like these:

■ Which areas of the product are so significant to the customer, or so prone to serious failure, that they must be tested with extreme care?
■ For the average area, and for the program as a whole, how much testing is enough?
■ What are the risks involved in leaving a certain bug unresolved?
■ Are certain components so unimportant as to not merit testing?
■ At what point can a product be considered adequately tested and ready for market?
■ How much longer can the product be delayed for testing and fixing bugs before the market viability diminishes the return on investment?
Tracking bugs and analyzing and assessing their significance are priorities. Management teams expect development and IT teams, as well as testing and quality assurance staff, to provide quantitative data regarding test coverage, the status of unresolved defects, and the potential impact of deferring certain defects. To meet these needs, testers must understand the products and technologies they test. They need models to communicate assessments of how much testing has been done on a given product, how deep testing will go, and at what point the product will be considered adequately tested. With a better understanding of this testing information, we make better predictions about quality risks.
In the era of the Internet, the connectivity that was lost when computing moved from the mainframe model to the personal computer (PC) model has, in effect, been reestablished. Personal computers are effectively networked over the Internet. Bug fixes and updated builds are made available, sometimes on a daily basis, for immediate download over the Internet. Product features that are not ready by ship date are made available later in service packs. The ability to distribute software over the Internet has brought down much of the cost associated with distributing some applications and their subsequent bug fixes.

Although the Internet offers connectivity for PCs, it does not offer the control over the client environment that was available in the mainframe model. The development and testing challenges posed by the Graphical User Interface (GUI) and event-based processing of the PC are enormous because the clients attempt remarkably complex tasks on operating systems (OSs) as different as UNIX, Macintosh OS, Linux, and the Microsoft OSs. They run countless combinations of processors, peripherals, and application software. Additionally, the testing of an enterprise client-server system may require the consideration of thousands of combinations of OSs, modems, routers, and client-server software components and packages. Web applications increase this complexity further by introducing browsers and Web servers into the mix.

Furthermore, wireless networks are becoming more pervasive, and their bandwidths continue to improve. On the client side, computer engineers continue to build smaller, yet more powerful, portable and mobile devices. Communication devices, wearable devices, and Internet appliances are expanding the possible combinations of Web client environments beyond the desktop. On the server side, software components that were normally located on a company’s enterprise server are moving toward the application services or Web services model. In this model, the components are hosted outside the corporate enterprise server, usually at third-party ASPs (application service providers), adding more challenges to the testing of Internet-based systems.

Software testing plays a more prominent role in the software development process than ever before. Developers are paying more attention to building testability into their code, as well as coming up with more ways to improve the unit-test frameworks around their production code. Companies are allocating more money and resources for testing because they understand that their reputations and success rest on the quality of their products, and that poor product and service quality makes failure more likely. The competitiveness of the computing industry (not to mention the savvy of most computer users) has eliminated most tolerance for buggy software. Nevertheless, many companies believe that the only way to compete in Internet time is to develop software as rapidly as possible. Short-term competitive issues often outweigh quality issues. One consequence of today’s accelerated development schedules is the
industry’s tendency to push software out into the marketplace as early as possible. Development teams get less and less time to design, code, test, and undertake process improvements. Market constraints and short development cycles often do not allow time for reflection on past experience and consideration of more efficient ways to produce and test software.
The Gray-Box Testing Approach

Black-box testing focuses on software’s external attributes and behavior. Such testing looks at an application’s expected behavior from the user’s point of view. White-box testing (also known as glass-box testing), on the other end of the spectrum, tests software with knowledge of internal data structures, physical logic flow, and architecture at the source code level. White-box testing looks at testing from the developer’s point of view. Both black-box and white-box testing are critically important complements of a complete testing effort. Individually, they do not allow for balanced testing. Black-box testing can be less effective at uncovering certain error types, such as data-flow errors or boundary condition errors at the source level. White-box testing does not readily highlight macro-level quality risks involving the operating environment, compatibility, time-related errors, and usability.

Gray-box testing incorporates elements of both black-box and white-box testing. It considers the outcome on the user end, system-specific technical knowledge, and the operating environment. It evaluates application design in the context of the interoperability of system components. The gray-box testing approach is integral to the effective testing of Web applications because Web applications comprise numerous components, both software and hardware. These components must be tested in the context of system design to evaluate their functionality and compatibility.

In our view, gray-box testing consists of methods and tools derived from knowledge of the application internals and the environment with which it interacts. Knowledge of the designer’s intended logic can be applied in test design and bug analysis to improve the probability of finding and reproducing bugs. See Chapter 5, “Web Application Components,” the section entitled “Testing Discussion,” for an example of gray-box testing methods.

Here are several unofficial definitions of gray-box testing from the Los Altos Workshop on Software Testing (LAWST) IX. (For more information on LAWST, visit www.kaner.com.)

Gray-box testing—Using inferred or incomplete structural or design information to expand or focus black-box testing. —Dick Bender
Gray-box testing—Tests designed based on the knowledge of algorithms, internal states, architectures, or other high-level descriptions of program behavior. —Doug Hoffman

Gray-box testing—Tests involving inputs and outputs; but test design is educated by information about the code or the program operation of a kind that would normally be out of the scope of view of the tester. —Cem Kaner

Gray-box testing is well suited to Web application testing because it factors in high-level design, environment, and interoperability conditions. It will reveal problems that are not as easily considered by a black-box or white-box analysis, especially problems of end-to-end information flow and distributed hardware/software system configuration and compatibility. Context-specific errors that are germane to Web systems are commonly uncovered in this process.

Another point to consider is that many of the types of errors that we run into in Web applications might well be discovered by black-box testers, if only we had a better model of the types of failures to look for and to design tests against. Unfortunately, we are still developing a better understanding of the risks associated with the new application and communication architectures. Therefore, the wisdom of traditional books on testing (e.g., Testing Computer Software, 2nd ed., by Kaner, Falk, and Nguyen, John Wiley & Sons, Inc., 1999, and Lessons Learned in Software Testing, by Kaner, Bach, and Pettichord, John Wiley & Sons, Inc., 2002) will not fully prepare the black-box tester to search for these types of errors. If we are equipped with a better understanding of the system as a whole, we’ll have an advantage in exploring the system for errors and in recognizing new problems or new variations of older problems.

As testers, we get ideas for test cases from a wide range of knowledge areas. This is partially because testing is much more effective when we know, and can model, the types of bugs we are looking for. We develop ideas of what might fail, and of how to find and recognize such a failure, from knowledge of many types of things [e.g., knowledge of the application and system architecture, the requirements and use of this type of application (domain expertise), and software development and integration]. As testers of complex systems, we should strive to attain a broad balance in our knowledge, learning enough about many aspects of the software and systems being tested to create a battery of tests that can challenge the software as deeply as it will be challenged in the rough and tumble of day-to-day use.

Finally, we are not suggesting that every tester in a group be a gray-box tester. We have seen a high level of success in several test teams that have a mix of different types of testers with different skill sets (e.g., subject matter expert,
database expert, security expert, API testing expert, test automation expert, etc.). The key is that, within that mix, at least some of the testers understand the system as a collection of components that can fail in their interactions with each other, and that these individuals understand how to control, and how to observe, those interactions in the testing and production environments.
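To make the idea concrete, here is a minimal sketch of a gray-box test written in Python. The application, URL, form fields, and database table are hypothetical stand-ins invented for illustration; they are not taken from this book's sample application. The point is only the shape of the test: it drives the system through its public interface (the black-box part) while using knowledge of the internal storage layer (the white-box part) to verify the outcome.

```python
import sqlite3
import urllib.parse
import urllib.request

# Hypothetical endpoint and database path -- substitute your own system's values.
BASE_URL = "http://localhost:8080/defects/new"
DB_PATH = "/var/data/defects.db"


def submit_defect_report(summary: str, severity: str) -> int:
    """Black-box step: submit a defect report through the public HTTP interface."""
    form = urllib.parse.urlencode({"summary": summary, "severity": severity}).encode()
    with urllib.request.urlopen(BASE_URL, data=form, timeout=10) as response:
        return response.status


def defect_row_count(summary: str) -> int:
    """Gray-box step: verify the server-side effect directly in the data store."""
    with sqlite3.connect(DB_PATH) as conn:
        cursor = conn.execute(
            "SELECT COUNT(*) FROM defect_reports WHERE summary = ?", (summary,)
        )
        return cursor.fetchone()[0]


def test_submit_creates_exactly_one_record() -> None:
    summary = "Login page drops session after timeout"
    before = defect_row_count(summary)
    status = submit_defect_report(summary, severity="high")
    after = defect_row_count(summary)

    # The UI-level check alone (status == 200) would miss duplicate or missing rows;
    # the back-end check catches data-layer failures the browser never shows.
    assert status == 200, f"unexpected HTTP status: {status}"
    assert after == before + 1, f"expected one new row, found {after - before}"


if __name__ == "__main__":
    test_submit_creates_exactly_one_record()
    print("gray-box check passed")
```

In practice the back-end check might go through a test interface or a read-only reporting view rather than the production schema, but the structure is the same: exercise the system from the outside, then confirm the result where the data actually lives.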
Real-World Software Testing

Web businesses have the potential to be high-profit ventures. In the dot-com era, venture capitalists could support a number of losing companies as long as they had a few winners to make up for their losses. A CEO had three to five years to get a start-up ready for IPO (six months to prove that the prototype worked, one or two years to generate some revenue, hence justifying the business model, and the remainder of the time to show that the business could be profitable someday). It is always a challenge to find enough time and qualified personnel to develop and deliver quality products in such a fast-paced environment.

Although standard software development methodologies such as the Capability Maturity Model (CMM) and ISO 9000 have been available, they are not yet well accepted by aggressive start-up companies. These standards and methods are great practices, but the fact remains that many companies will rely on the efforts of a skilled development and testing staff rather than on a process that they fear might slow them down. In that situation, no amount of improved standards and process efficiencies can make up for the efforts of a skilled development and testing staff. That is, given the time and resource constraints, they still need to figure out how to produce quality software.

The main challenge that we face in Web application testing is to learn the associated technologies in order to have a better command of the environment. We need to know how Web technologies affect the interoperability of software components, as well as Web systems as a whole. Testers also need to know how to approach the testing of Web-based applications. This requires being familiar with test types, testing issues, common software errors, and the quality-related risks that are specific to Web applications. We need to learn, and we need to learn fast. Only with a solid understanding of software testing basics and a thorough knowledge of Web technologies can we competently test Web-based systems.

The era of high-profit dot-com ventures has ended. Many businesses based on the dot-com model have closed their doors. Within three years of its March 2000 peak above 5,000 points, the NASDAQ dropped as low as 1,100. We have learned many great and
painful business lessons from this era but, unfortunately, not many quality-related lessons. The good news is that in a difficult economic time, market demand is low, so customers are in control. They want to pay less money for more, and they demand higher-quality products and services. Bugs left in the product could mean anything from high support costs to lost sales to contract cancellations. Those are losses that companies can normally absorb during good economic times, but not in bad times.

We have seen many movements and initiatives in which executive staffs are paying more attention and asking more interesting quality-related questions. They are asking questions like, “How can we produce better software (higher quality), faster (improved time-to-market), at a lower cost (lower production and quality costs)?” While this type of question has been asked before, executive staffs now have more time to listen and are more willing to support the QA effort. It also means that much more demand is placed on testing, which requires QA staff to be more skilled in testing strategy, better educated in software engineering and technology, better equipped with testing practices, and savvier in the concepts and application of software testing tools.
Themes of This Book

The objective of this book is to introduce testers to the discipline of gray-box testing by offering readers information about the interplay among Web applications, component architectural designs, and their network systems. We expect that this will help testers develop new testing ideas, enabling them to uncover and troubleshoot new types of errors and conduct more effective root-cause analyses of software failures discovered during testing or product use. The discussions in this book focus on determining what to test, where to test, and when to test. As appropriate, real-world testing experiences and examples of errors are included.

To effectively plan and execute the testing of your Web application, you need to possess the following: good software testing skills; knowledge of your application, which you will need to provide; knowledge of Web technologies; an understanding of the types of tests and their applicability to Web applications; knowledge of several types of Web application-specific errors (so you know what to look for); and knowledge of some of the available tools and their applicability, which this book offers you. (See Figure 1.1.) Based on this knowledge and skill set, you can analyze the testing requirements to come up with an effective plan for your test execution. If this is what you are looking for, this book is for you. It is assumed that you have a solid grasp of standard software testing practices and procedures.
[Figure 1.1 Testing skill and knowledge. Your application knowledge, testing skills, knowledge of Web technologies, types of tests, tools and applicability, and examples of errors feed an analysis step that leads to test planning, test execution, and error and reproducibility analysis.]
TESTER RESPONSIBILITIES
■ Identifying high-risk areas that should be focused on in test planning
■ Identifying, analyzing, and reproducing errors effectively within Web environments (which are prone to multiple environmental and technological variables)
■ Capitalizing on existing errors to uncover more errors of the same class, or related classes
To achieve these goals, you must have high-level knowledge of Web environments and an understanding of how environmental variables affect the testing of your project. The information and examples included in this book will help you to do just that.

There is one last thing to consider before reading on. Web applications are largely platform-transparent. However, most of the testing and error examples included in this book are based on Microsoft technologies. This allows us to draw heavily on a commercial product for real examples. While Hung was writing the first edition of this book, his company was developing TRACKGEAR, a Web-based bug-tracking solution that relies on Microsoft Web technologies. As the president of that company, Hung could lay out the engineering issues that were considered in the design and testing of the product, details that testing authors normally cannot reveal (because of nondisclosure contracts) about software that they have developed or tested. Our expectation, however, is that the testing fundamentals should apply to technologies beyond Microsoft.
What’s New in the Second Edition

Much has changed since the first edition of this book. Technology in the digital world continues to move at a fast pace. At the time of this writing, the economy has gone into a recession (at least in the United States, if not elsewhere), which has led to demand for less buggy products and services. Consequently, there has been a more widespread call for better and faster testing methods. The second edition of Testing Applications on the Web is our opportunity to make the improvements and corrections to the first edition that our readers have suggested over the years. The book also benefits from the addition of two co-authors, Bob Johnson and Michael Hackett, who bring with them much experience in Web testing strategies and practices, as well as training expertise, which helps elevate the usefulness of the content to the next level.
New Contents and Significant Updates

■ A new chapter, entitled “Mobile Web Application Platform” (Chapter 6), covers the mobile Web application model, exploring the technological similarities and differences between a desktop and a mobile Web system. The chapter provides the mobile Web technology information necessary to prepare you for developing test plans and strategies for this new mobile device platform. There is also a more in-depth discussion in a follow-up chapter, entitled “Testing Mobile Web Applications” (Chapter 20).
■ A new chapter, entitled “Testing Mobile Web Applications” (Chapter 20), covers experience-based information that you can use in the development of test strategies, test plans, and test cases for mobile Web applications. This is the companion to Chapter 6.
■ A new chapter, entitled “Using Scripts to Test” (Chapter 13), covers the use of scripts to execute tests, to help you analyze your test results, and to help you set up and clean up your system and test data.
■ A new chapter, entitled “Server-Side Testing” (Chapter 12), covers both the testing of application servers and the testing of server-side application functions that may never be accessible through the client.
■ The first-edition Chapter 16, entitled “Performance, Load and Stress Tests,” comprising much of the current Chapter 19 (“Performance Testing”), has been significantly updated with new information and tips on how to effectively design, plan for, and deploy performance-related tests.
■ The first-edition Chapter 15, entitled “Web Security Concerns,” now entitled “Web Security Testing” and comprising much of the current Chapter 18, has been significantly updated with new information on common software and Web site vulnerabilities and tips on how to test for software-specific security bugs.
What Remains from the First Edition

■ We have worked hard to keep intact the organizational layout that was well received by first-edition readers.
■ We continue our commitment to offering the technology-related information that has the most impact for testers, in a clear and pleasant-to-read writing style with plenty of visuals.
■ QACity.Com will continue to be the Internet resource for busy testers, tracking all the Internet links referenced in this book. Given that the Internet changes by the minute, we expect that some of the links referenced will be outdated at some point. We are committed to updating QACity.Com on an ongoing basis to ensure that the information is there and up to date when you need it.
CHAPTER 2
Web Testing versus Traditional Testing
Why Read This Chapter?

Web technologies require new testing and bug analysis methods. It is assumed that you have experience in testing applications in traditional environments; what you may lack, however, is the means to apply your experience to Web environments. To make such a transition effectively, you need to understand the technology and architecture differences between traditional testing and Web testing.

TOPICS COVERED IN THIS CHAPTER
◆ Introduction
◆ The Application Model
◆ Hardware and Software Differences
◆ The Differences between Web and Traditional Client-Server Systems
◆ Web Systems
◆ Bug Inheritance
(continued)
15
04 201006 Ch02.qxd
16
5/29/03
8:57 AM
Page 16
Chapter 2 TOPICS COVERED IN THIS CHAPTER (continued) ◆ Back-End Data Accessing ◆ Thin-Client versus Thick-Client Processing ◆ Interoperability Issues ◆ Testing Considerations ◆ Bibliography
Introduction

This chapter presents the application model and shows how it applies to mainframes, PCs, and, ultimately, Web/client-server systems. It explores the technology differences between mainframes and Web/client-server systems, as well as the technology differences between PCs and Web/client-server systems. Testing methods that are suited to Web environments are also discussed. Although many traditional software testing practices can be applied to the testing of Web-based applications, there are numerous technical issues that are specific to Web applications that need to be considered.
The Application Model

A computer system, which consists of hardware and software, can receive inputs from the user, then store them somewhere, whether in volatile memory such as RAM (Random Access Memory) or in nonvolatile memory such as hard disk memory. It can execute the instructions given by software by performing computation using the CPU (Central Processing Unit) computing power. Finally, it can send the outputs back to the user. Figure 2.1 illustrates how humans interact with computers. Through a user interface (UI), users interact with an application by offering input and receiving output in many different forms: query strings, database records, text forms, and so on. Applications take input, along with requested logic rules, and store them in memory, and then manipulate data through computing; they also perform file reading and writing (more input/output and data storing). Finally, output results are passed back to the user through the UI. Results may also be sent to other output devices, such as printers.
Figure 2.1 The application model.
In traditional mainframe systems, as illustrated in Figure 2.2, all of an application’s processes, except for UI controls, occur on the mainframe computer. User interface controls take place on dumb terminals that simply echo text from the mainframe. Little computation or processing occurs on the terminals themselves. The network connects the dumb terminals to the mainframe. Dumb-terminal UIs are text-based or form-based (nongraphical). Users send data and commands to the system via keyboard inputs.

Desktop PC systems, as illustrated in Figure 2.3, consolidate all processes—from UI through rules to file systems—on a single physical box. No network is required for a desktop PC. Desktop PC applications can support either a text-based UI (command-line) or a Graphical User Interface (GUI). In addition to keyboard input events, GUI-based applications also support mouse input events such as click, double-click, mouse-over, drag-and-drop, and so on.
Figure 2.2 Mainframe systems.
Figure 2.3 Desktop PC systems.
Client-server systems, upon which Web systems are built, require a network and at least two machines to operate: a client computer and a server computer, which serves requested data to the client computer. With the vast majority of Web applications, a Web browser serves as the UI container on the client computer. The server receives input requests from the client and manipulates the data by applying the application’s business logic rules. Business logic rules are the computations that an application is designed to carry out based on user input—for example, sales tax might be charged to any e-commerce customer who enters a California mailing address. Another example might be that customers over age 35 who respond to a certain online survey will be mailed a brochure automatically. This type of activity may require reading or writing to a database. Data is sent back to the client as output from the server. The results are then formatted and displayed in the client browser. The client-server model, and consequently the Web application model, is not as neatly segmented as that of the mainframe and the desktop PC. In the client-server model, not only can either the client or the server handle some of the processing work, but server-side processes can be divided between multiple physical boxes or computers (application server, Web server, database server, etc.). Figure 2.4, one of many possible client-server models, depicts I/O and logic rules handled by an application server (the server in the center), while a database server (the server on the right) handles data storage. The dotted lines
in the illustration indicate processes that may take place on either the client-side or the server-side. See Chapter 5, “Web Application Components,” for information regarding server types.

A Web system may comprise any number of physical server boxes, each handling one or more service types. Later in this chapter, Table 2.1 illustrates some of the possible three-box server configurations. Note that the example is a relatively basic system. A Web system may contain multiple Web servers, application servers, and multiple database servers (such as a server farm, a grouping of similar server types that share workload). Web systems may also include other server types, such as e-mail servers, chat servers, e-commerce servers, and user profile servers (see Chapter 5 for more information). Understanding how your Web application-under-test is structured is invaluable for bug analysis—trying to reproduce a bug or learning how you can find more bugs similar to the one that you are seeing.

Keep in mind that it is software, not hardware, that defines clients and servers. Simply put, clients are software programs that request services from other software programs on behalf of users. Servers are software programs that offer services. In addition, client-server is an overloaded term; it is only useful from the perspective of describing a system. A server may, and often does, become a client in the chain of requests. Furthermore, the server-side may include many applications and systems, including mainframe systems.
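To make the point that clients and servers are simply software more concrete, here is a minimal sketch in Python (our illustration, not code from the book): a few lines of server software that offer a trivial record service, and a client program that requests it on behalf of a user. The port number, URL path, and record contents are arbitrary choices for the example.

# Minimal illustration: a server is software that offers a service, and a
# client is software that requests that service on behalf of a user.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RecordHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Trivial "business logic": return a canned record set as JSON.
        body = json.dumps({"records": [{"id": 1, "state": "CA"}]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("localhost", 8080), RecordHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: request the service and present the result to the user.
with urllib.request.urlopen("http://localhost:8080/records") as response:
    print(json.loads(response.read()))

server.shutdown()

The same program can turn around and act as a client of yet another server, which is why client and server describe roles rather than machines.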
Figure 2.4 Client-server systems.
Hardware and Software Differences

Mainframe systems (Figure 2.5) are traditionally controlled environments, meaning that hardware and software are primarily supported, end to end, by the same manufacturer (it does not necessarily mean that all the subcomponents are produced by the same company). A mainframe with a single operating system, and applications sold and supported by the same manufacturer, can serve multiple terminals from a central location. Compatibility issues are more manageable compared to PC and client-server systems.

A single desktop PC system consists of mixed hardware and software—multiple hardware components built and supported by different manufacturers, multiple operating systems, and nearly limitless combinations of software applications. Configuration and compatibility issues become difficult or almost impossible to manage in this environment.

A Web system consists of many clients as well as server hosts (computers). The system’s various flavors of hardware components and software applications begin to multiply. The server-side of Web systems may also support a mixture of software and hardware and, therefore, is more complex than mainframe systems from the configuration and compatibility perspectives. See Figure 2.6 for an illustration of a client-server system running on a local area network (LAN).
Figure 2.5 Controlled hardware and software environment.
Figure 2.6 A client-server system on a LAN (an Internet connection through a DSU/CSU and router, hubs, and a mix of UNIX, NT, AS/400, laptop, and Macintosh machines).
The GUI of the PC makes multiple controls available on screen at any given time (e.g., menus, pull-down lists, help screens, pictures, and command buttons). Consequently, event-driven browsers are also produced, taking advantage of the event-handling features offered by the operating system (OS). (In the event-driven model, inputs are driven by events such as a mouse click or a keypress on the keyboard.) However, event-based GUI applications (data input coupled with events) are more difficult to test. For example, each event applied to a control in a GUI may affect the behavior of other controls. Also, special dependencies can exist between GUI screens; interdependencies and constraints must be identified and tested accordingly.
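To get a feel for why such coverage is labor-intensive, here is a rough Python sketch (our illustration, not from the book) that simply counts the stimuli produced by a handful of events applied to a handful of controls; even this toy case yields hundreds of ordered two-step sequences, before any screen dependencies are considered.

# Rough illustration of event/control combinatorics in a GUI.
from itertools import permutations, product

events = ["click", "double-click", "ctrl+click", "mouse-over", "drag-and-drop"]
controls = ["OK button", "Cancel button", "name field", "state list"]

# Every (event, control) pairing is one test stimulus.
stimuli = list(product(events, controls))
print(len(stimuli), "single stimuli")            # 5 x 4 = 20

# Ordered sequences of two distinct stimuli, since order may matter.
pairs = list(permutations(stimuli, 2))
print(len(pairs), "two-step sequences")          # 20 x 19 = 380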
The Differences between Web and Traditional Client-Server Systems

The last two sections point out the application architecture and hardware and software differences among the mainframe, PC, and Web/client-server systems. We will begin this section by exploring additional differences between Web and traditional systems so that appropriate testing considerations can be formulated.
Client-Side Applications

As illustrated in Figure 2.7, most client-server systems are data-access-driven applications. A client typically enables users, through the UI, to send input data, receive output data, and interact with the back end (for example, sending a query command). Clients of traditional client-server systems are platform-specific. That is, for each supported client operating system (e.g., Windows 16- and 32-bit, Solaris, Linux, Macintosh, etc.), a client application will be developed and tested for that target operating system.

Most Web-based systems are also data-access-driven applications. The browser-based clients are designed to handle similar activities to those supported by a traditional client. The main difference is that the Web-based client operates within the Web browser’s environment. A Web browser is operating system-specific client software running on a client computer. It renders HyperText Markup Language (HTML), as well as active contents, to display Web page information. Several popular browsers also support active content such as client-side scripting, Java applets, ActiveX controls, eXtensible Markup Language (XML), cascading style sheets (CSS), dynamic HTML (DHTML), security features, and other goodies. To do this, browser vendors must create rendering engines and interpreters to translate and format HTML contents. In making these software components, various browsers and their releases introduce incompatibility issues. See Chapter 10, “User Interface Tests,” and Chapter 17, “Configuration and Compatibility Tests,” for more information.

From the Web application producer’s perspective, there is no need to develop operating system-specific clients because the browser vendors have already done that (e.g., Netscape, Microsoft, AOL, etc.). In theory, if your HTML contents are designed to conform to the HTML 4 standard, your client application should run properly in any browser that supports the HTML 4 standard, from any vendor. In practice, however, we will find ourselves working laboriously to address vendor-specific incompatibility issues introduced by each browser and its various releases. At the time of this writing, the golden rule is: “Web browsers are not created equal.”
Figure 2.7 Client-server versus Web-based clients. (Traditional client-server: develop and test four platform-specific clients, for Windows 16-bit, Windows 32-bit, Solaris, and Macintosh. Web-based: develop and test HTML contents that are sent to browsers on Windows 9x, Windows NT, Solaris, and Macintosh; browser vendors are responsible for producing platform-specific browsers.)
In addition to the desktop client computer and browser, there are new types of clients and browsers, which are a lot smaller than the desktop PC version. These clients are often battery-powered, rather than wall-electric-powered as is a desktop PC. These clients are mobile devices including PDAs (Personal Digital Assistants), smart phones, and handheld PCs. Since these devices represent another class of client computers, there are some differences in the mobile application model. For simplicity, in this chapter, we will only refer to the desktop PC in the client discussion. Read Chapter 6, “Mobile Web Application Platform,” for discussions on the mobile client application model.
Event Handling

In the GUI and event-driven model, inputs, as the name implies, are driven by events. Events are actions taken by users, such as mouse movements and clicks,
or the input of data through a keyboard. Some objects (e.g., a push button) may receive mouse-over events whenever a mouse passes over them. A mouse single-click is an event. A mouse double-click is a different kind of event. A mouse click with a modifier key, such as Ctrl, is yet another type of event. Depending on the type of event applied to a particular UI object, certain procedures or functions in an application will be executed. In an event-driven environment, this type of procedure is referred to as event-handling code.

Testing event-driven applications is more complicated because it’s very labor-intensive to cover the testing of many combinations and sequences of events. Simply identifying all possible combinations of events can be a challenge because some actions trigger multiple events.

Browser-based applications introduce a different flavor of event-handling support. Because Web browsers were originally designed as a data presentation tool, there was no need for interactions other than single-clicking for navigation and data submission, and a mouse-over ALT attribute for an alternate description of a graphic. Therefore, standard HTML controls such as form-based controls and hyperlinks are limited to single-click events. Although script-based events can be implemented to recognize other events such as double-clicking and drag-and-drop, it’s not natural in the Web-based user interface to do so (not to mention that those other events also cause incompatibility problems among different browsers).

In Web-based applications, users may click links that generate simulated dialog boxes (the server sends back a page that includes tables, text fields, and other UI objects). Users may interact with browser-based UI objects in the process of generating input for the application. In turn, events are generated. Some of the event-handling code is in scripts that are embedded in the HTML page and executed on the client-side. Other event-handling code is in UI components (such as Java applets and ActiveX controls) embedded in the HTML page and executed on the client-side. Still other code is executed on the server-side. Understanding where (client- or server-side) each event is handled enables you to develop useful test cases as well as reproduce errors effectively.

Browser-based applications offer very limited keyboard event support. You can navigate within the page using the Tab and Shift-Tab keys. You can activate a hyperlink to jump to another link, or push a command button, by pressing the Enter key while the hyperlink text, graphic, or button is highlighted. Support for keyboard shortcuts and access keys, such as Alt-[key] or Ctrl-[key], is not available for Web applications running in the browser’s environment, although it is available for the browser application itself.

Another event-handling implication in browser-based applications is the one-way request and submission model. The server generally does not receive commands or data until the user explicitly clicks a button, such as Submit, to submit form data; or the user may request data from the server by clicking a link.
This is referred to as the explicit submission model. If the user simply closes down a browser but does not explicitly click a button to save data or to log off, data will not be saved and the user is still considered logged on (on the server-side).

TEST CASE DEVELOPMENT TIPS
Based on the knowledge about the explicit submission model, try the following tests against your application under test:

TEST #1
◆ Use your valid ID and password to log on to the system.
◆ After you are in, close your browser instead of logging off.
◆ Launch another instance of the browser and try to log in again. What happens? Does the system complain that you are already in? Does the system allow you to get in and treat it as if nothing has happened? Does the system allow you to get in as a different instance of the same user?

TEST #2
Suppose that your product has a user-license restriction: That is, if you have five concurrent-user licenses, only five users can be logged on to the system concurrently. If the sixth user tries to log on, the system will block it and inform the sixth user that the application has run out of concurrent-user licenses.
◆ Use your valid set of ID and password to log on to the system as six separate users. As the sixth user logs on, you may find that the system will detect the situation and block the sixth user from logging on.
◆ Close all five browsers that have already logged on instead of logging off.
◆ Launch another instance of the browser and try to log on again. Does the system still block this logon because it thinks that there are five users already on the system?

TEST #3
◆ Use your valid ID and password to log on to the system.
◆ Open an existing record as if you are going to update it.
◆ Close your browser instead of logging off.
◆ Launch another instance of the browser and try to log on again.
◆ Try to open the same record that you opened earlier in the previous browser. What happens? Is the record locked? Are you allowed to update the record anyway?
By the way, there is no right or wrong answer for the three preceding tests, because we don’t know how your system is designed to handle user sessions, user authentication, and record locking. The main idea is to present you with some of the possible interesting scenarios that might affect your application under test due to the nature of the explicit submission model in Web-based applications.
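For teams that automate such checks, TEST #1 might be scripted along the following lines. This is a hedged sketch only: the base URL, the /login endpoint, the field names, and the use of the third-party requests library are all assumptions made for illustration and will differ in your application.

# Sketch of TEST #1: log on, abandon the session, then log on again.
# The endpoint, field names, and responses are hypothetical.
import requests  # third-party library (pip install requests)

BASE = "http://testserver.example.com"
CREDS = {"user_id": "tester1", "password": "secret"}

# First "browser instance": log on, then abandon the session without logging off.
first = requests.Session()
r1 = first.post(BASE + "/login", data=CREDS, timeout=10)
print("first logon:", r1.status_code)
# Discarding `first` here is the scripted equivalent of closing the browser.

# Second "browser instance": try to log on as the same user.
second = requests.Session()
r2 = second.post(BASE + "/login", data=CREDS, timeout=10)
print("second logon:", r2.status_code, r2.text[:200])
# Inspect the second response: does the system complain that the user is
# already logged on, silently start a new session, or allow a second
# concurrent instance of the same user?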
Application Instance and Windows Handling

Standard event-based applications may support multiple instances, meaning that the same application can be loaded into memory many times as separate processes. Figure 2.8 shows two instances of the Microsoft Word application. Similarly, multiple instances of a browser can run simultaneously. With multiple browser instances, users may be able to log in to the same Web-based application and access the same data table—on behalf of the same user or different users. Figure 2.9 illustrates two browser instances, each accessing the same application and data using the same or a different user ID and password.

From the application’s perspective, keeping track of multiple instances, the data, and the users who belong to each instance can be problematic. For example, a regular user has logged in using one instance of the browser. An Admin user has also logged in to the same system using another instance of the browser. It’s common that the application server may mistakenly receive data from, and send data to, one user thinking that the data belongs to the other user. Test cases that uncover errors surrounding multiple-instance handling should be thoroughly designed and executed.
Figure 2.8 Multiple application instances.
Figure 2.9 Multiple application windows.
Within the same instance of a standard event-based application, multiple windows may be opened simultaneously. Data altered in one of an application’s windows may affect data in another of the application’s windows. Such applications are referred to as multiple document interface (MDI) applications (Figure 2.10). Applications that allow only one active window at a time are known as single document interface (SDI) applications (Figure 2.11). SDI applications allow users to work with only one document at a time. Microsoft Word (Figure 2.10) is an example of an MDI application. Notepad (Figure 2.11) is an example of an SDI application.
Figure 2.11 Single document interface (SDI) application.
Multiple document interface applications are more interesting to test because they might fail to keep track of events and data that belong to multiple windows. Test cases designed to uncover errors caused by the support of multiple windows should be considered. Multiple document interfaces, or multiple-window interfaces, are only available to clients in a traditional client-server system. The Web browser interface is considered flat because it can only display one page at a time. There is no hierarchical structure for the Web pages; therefore, one can easily jump across several links and quickly lose track of the original position.
UI Controls

In essence, an HTML page that is displayed by a Web browser consists of text, hyperlinks, graphics, frames, tables, forms, and balloon help text (ALT tag). Basic browser-based applications do not support dialog boxes, toolbars, status bars, and other common UI controls. Extra effort can be made to take advantage of Java applets, ActiveX controls, scripts, CSS, and other helper applications to go beyond the basic functionality. However, there will be compatibility issues among different browsers.
Web Systems

The complexities of the PC model are multiplied exponentially in Web systems (Figure 2.12). In addition to the testing challenges that are presented by multiple client PCs and mobile devices, the server-side of Web systems involves hardware of varying types and a software mix of OSs, service processes, server packages, and databases.
Figure 2.12 Web system architecture (Web browsers and client operating systems connecting over the Internet, an intranet, or an extranet to Web servers, application servers, middleware, e-commerce servers, databases, and back-office/ERP systems).
Hardware Mix

With Web systems and their mixture of many brands of hardware to support, the environment can become very difficult to control. Web systems have the capacity to use machines of different platforms, such as UNIX, Linux, Windows, and Macintosh boxes. A Web system might include a UNIX server that is used in conjunction with other servers that are Linux-, Windows-, or Macintosh-based. Web systems may also include mixtures of models from the same platform (on both the client- and server-sides). Such hardware mixtures present testing challenges because different computers in the same system may employ different OSs, CPU speeds, buses, I/O interfaces, and more. Each combination has the potential to cause problems.
Software Mix

At the highest level, as illustrated in Figure 2.12, Web systems may consist of various OSs, Web servers, application servers, middleware, e-commerce servers, database servers, major enterprise resource planning (ERP) suites, firewalls, and browsers. Application development teams often have little control over the kind of environment into which their applications are installed. In producing software for mainframe systems, development was tailored to one specific system. Today, for Web systems, software is often designed to run on a wide range of hardware and OS combinations, and risks of software incompatibility are always present. An example is that different applications may not share the same version of a database server. On the Microsoft platform, a missing or incompatible DLL (dynamic link library) is another example. (Dynamic link libraries are software components that can exist on both the client- and server-sides whose functions can be called by multiple programs on demand.) Another problem inherent in the simultaneous use of software from multiple vendors is that when each application undergoes a periodic upgrade (client- or server-side), there is a chance that the upgrades will not be compatible with preexisting software.

A Web system software mix may include any combination of the following (a short sketch of enumerating such combinations appears after the list):
■■ Multiple operating systems
■■ Multiple software packages
■■ Multiple software components
■■ Multiple server types, brands, and models
■■ Multiple browser brands and versions
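Because these combinations multiply quickly, many teams generate the raw configuration matrix mechanically and then prune it by risk. The following Python sketch is ours, with made-up component lists; substitute whatever operating systems, servers, and browsers your application actually claims to support.

# Sketch: mechanically enumerate the configuration matrix for a software mix.
from itertools import product

server_os  = ["Windows NT", "UNIX", "Linux"]
web_server = ["IIS", "Apache"]
db_server  = ["SQL Server", "Oracle", "DB2"]
browser    = ["IE 5.5", "IE 6.0", "Netscape 4.7", "Netscape 6.2"]

matrix = list(product(server_os, web_server, db_server, browser))
print(len(matrix), "combinations")   # 3 x 2 x 3 x 4 = 72

for combo in matrix[:3]:             # show the first few rows of the matrix
    print(combo)

# In practice the full matrix is pruned by market share, risk, and what the
# application claims to support, because testing every cell is rarely affordable.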
Server-Based Applications

Server-based applications differ from client applications in two ways. First, server-based applications are programs that don’t have a UI with which the end users of the system interact. End users only interact with the client-side application. In turn, the client interacts with server-based applications to access functionality and data via communication protocols, application programming interfaces (APIs), and other interfacing standards. Second, server-based applications run unattended; that is, when a server-based application is started, it’s intended to stay up, waiting to provide services to client applications whether or not any client is requesting services. In contrast, to use a client application, an end user must explicitly launch the client application and interact with it via a UI. Therefore, to black-box testers, server-based applications are black boxes.

You may ask: “So it is with desktop applications. What’s the big deal?” Here is an example: When a failure is caused by an error in a client-side or desktop application, the users or testers can provide essential information that helps reproduce or analyze the failure because they are right in front of the application. Server-based applications or systems are often isolated from the end users. When a server-based application fails, as testers or users on the client-side, we often don’t know when it failed, what happened before it failed, who was or how many users were on the system at the time it failed, and so on. This makes reproducing bugs even more challenging for us. In testing Web systems, we need a better way to track what goes on with applications on the server-side.

One of the techniques used to enhance our failure reproducibility capability is event logging. With event logging, server-based applications can record activities to a file that might not normally be seen by an end user. When an application uses event logging, the recorded information that is saved can be read in a reliable way. Operating systems often include logging utilities. For example, Microsoft Windows 2000 includes the Event Viewer, which enables users to monitor events logged in the Application (the most interesting for testing), Security, and System logs. The Application log allows you to track events generated by a specific application. For example, you might want to log file read and write errors generated by your application; the Application log will allow you to do so. You can create and include additional logging capabilities in your application under test to facilitate the defect analysis and debugging process, should your developers and your test teams find value in them. (Refer to the “Server-Side Testing Tips” section of Chapter 12, “Server-Side Testing,” for more information on using log files.) Have discussions with your developers to determine how event logging can be incorporated into or created to support the testing process.
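As a simple illustration of the idea (not a prescription for any particular server product), a server-based component written in Python could record its activities with the standard logging module, giving testers a file to read when a failure on the server side must be analyzed or reproduced. The component name and events below are invented.

# Illustrative event logging for a server-based component: each significant
# step is recorded so testers can reconstruct what happened on the server
# side after a failure, even though no UI was visible at the time.
import logging

logging.basicConfig(
    filename="appserver_events.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("order-service")

def process_order(order_id, amount):
    log.info("received order %s for %.2f", order_id, amount)
    try:
        if amount <= 0:
            raise ValueError("amount must be positive")
        # ... real work would go here: database writes, calls to other servers ...
        log.info("order %s committed", order_id)
    except Exception:
        # The full traceback lands in the log file rather than on a user's screen.
        log.exception("order %s failed", order_id)

process_order("A-1001", 25.00)
process_order("A-1002", -5.00)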
Distributed Server Configurations

Server software can be distributed among any number of physical server boxes, which further complicates testing. Table 2.1 illustrates several possible server configurations that a Web application may support. You should identify the configurations that the application under test claims to support. Matrices of all possible combinations should be developed. Especially for commercial off-the-shelf server applications, testing should be executed on each configuration to ensure that application features are intact. Realistically, this might not be possible due to resource and time constraints. For more information on testing strategies, see Chapter 17.

Table 2.1 Distributed Server Configurations
One-box model: Box 1: NT-based Web server, NT-based application server, NT-based database server.
Two-box model: Box 1: NT-based Web server, NT-based application server. Box 2: NT-based database server.
Three-box model: Box 1: NT-based Web server, NT-based application server. Box 2: NT-based Web server, NT-based application server. Box 3: UNIX-based database server.
One-box model: Box 1: UNIX-based Web server, UNIX-based application server, UNIX-based database server.
Two-box model: Box 1: UNIX-based Web server, UNIX-based application server. Box 2: UNIX-based database server.
Three-box model: Box 1: NT-based Web server, NT-based application server. Box 2: NT-based Web server, NT-based application server. Box 3: NT-based database server.
The Network

The network is the glue that holds Web systems together. It connects clients to servers and servers to servers. This variable introduces new testing issues, including reliability, accessibility, performance, security, configuration, and compatibility. As illustrated in Figure 2.12, the network traffic may consist of several protocols supported by the TCP/IP network. It’s also possible to have several networks using different net OSs connecting to each other through gateways. Testing issues related to the network can be a challenge or beyond the reach of black-box testing. However, understanding the testing-related issues surrounding the network enables us to better define testing problems and ask for appropriate help. (See Chapter 4, “Networking Basics,” for more information.) In addition, with the proliferation of mobile devices, wireless networks are becoming more popular. It is useful to also have a good understanding of how a wireless network may affect your Web applications, especially mobile Web applications. (See Chapter 6, “Mobile Web Application Platform,” for an overview of wireless networks.)
Bug Inheritance

It is common for Web applications to rely on preexisting objects or components. Therefore, the newly created systems inherit not just the features but also the bugs that existed in the original objects. One of the important benefits of both object-oriented programming (OOP) and component-based programming is reusability. Rather than writing the code from scratch, a developer can take advantage of preexisting features created by other developers (with use of the application programming interface, or API, and proper permission) by incorporating those features into his or her own application. In effect, code is recycled, eliminating the need to rewrite existing code. This model helps accelerate development time, reduces the amount of code that needs to be written, and maintains consistency between applications.
The potential problem with this shared model is that bugs are passed along with components. Web applications, due to their component-based architecture, are particularly vulnerable to the sharing of bugs. At the lowest level, the problem has two major impacts on testing. First, existing objects or components must be tested thoroughly before other applications or objects can use their functionality. Second, regression testing must be executed comprehensively (see “Regression Testing” in Chapter 3, “Software Testing Basics,” for more information). Even a small change in a parent object can alter the functionality of an application or object that uses it.

This problem is not new. Object-oriented programming and component-based software have long been used in PCs. With the Web system architecture, however, the problem is multiplied due to the fact that components are shared across servers on a network. The problem is exacerbated by the demand that software be developed in an increasingly shorter time. At the higher level, bugs in server packages, such as Web servers and database servers, and bugs in Web browsers themselves, will also have an effect on the software under test (see Chapter 5, “Web Application Components,” for more information). This problem has a greater impact on security risks. Chapter 18, “Web Security Testing,” contains an example of a buffer overflow error.
Back-End Data Accessing

Data in a Web system is often distributed; that is, it resides on one or more (server) computers rather than the client computer. There are several methods of storing data on a back-end server. For example, data can be stored in flat files, in a nonrelational database, in a relational database, or in an object-oriented database. In a typical Web application system, it’s common that a relational database is employed so that data accessing and manipulation can be more efficient compared to a flat-file database.

In a flat-file system, when a query is initiated, the results of that query are dumped into files on a storage device. An application then opens, reads, and manipulates data from these files and generates reports on behalf of the user. To get to the data, the application needs to know exactly where the files are located and what their names are. Access security is usually imposed at the application level and the file level.

In contrast, a database, such as a relational database, stores data in tables of records. Through the database engine, applications access data by getting a set of records without knowing where the physical data files are located or what they are named. Data in relational databases is accessed via database names (not to be mistaken for file names) and table names. Relational database files can be stored on multiple servers. Web systems using a relational database can impose security at the application server level, the database server level, as
well as at the table and user-based privilege levels. All of this means that testing back-end data accessing by itself is a big challenge to testers, especially black-box testers, because they do not see the activities on the back end. It makes reproducing errors more difficult and capturing test coverage more complicated. See Chapter 14, “Database Tests,” for more information on testing back-end database accessing.
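The difference between file-path access and table-name access can be sketched in a few lines of Python; SQLite stands in here for a real database server purely for illustration, and the file layout and table are invented.

# Contrast: flat-file access versus relational (table-name) access.
import os
import sqlite3
import tempfile

# Flat-file style: the application must know the physical file name and path,
# open the file itself, and parse the records.
path = os.path.join(tempfile.gettempdir(), "orders_1999.txt")
with open(path, "w") as f:
    f.write("1001|CA|25.00\n1002|NV|10.00\n")
with open(path) as f:                      # breaks if the file moves or is renamed
    ca_rows = [line.split("|") for line in f if line.split("|")[1] == "CA"]

# Relational style: ask the database engine for records by table name; where
# the physical data files live is the engine's concern, not the application's.
conn = sqlite3.connect(":memory:")         # SQLite as a stand-in for a real DBMS
conn.execute("CREATE TABLE orders (order_id INTEGER, state TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1001, "CA", 25.0), (1002, "NV", 10.0)])
ca_rows_db = conn.execute(
    "SELECT order_id, total FROM orders WHERE state = ?", ("CA",)).fetchall()
conn.close()

print(ca_rows)
print(ca_rows_db)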
Thin-Client versus Thick-Client Processing

Thin-client versus thick-client processing is concerned with where applications and components reside and execute. Components may reside on a client machine and on one or more server machines. The two possibilities are:

Thin client. With thin-client systems, the client PC or mobile device does very little processing. Business logic rules are executed on the server side. Some simple HTML Web-based applications and handheld devices utilize this model. This approach centralizes processing on the server and eliminates most client-side incompatibility concerns. (See Table 2.2.)

Thick client. The client machine runs the UI portion of the application as well as the execution of some or all of the business logic. In this case, the browser not only has to format the HTML page, but it also has to execute other components such as Java applets and ActiveX controls. The server machine houses the database that processes data requests from the client. Processing is shared between client and server. (See Table 2.3.)
Table 2.2 Thin Client
Desktop PC (thin client): UI
Server: Application rules, Database

Table 2.3 Thick Client
Desktop PC (thick client): UI, Application rules
Server: Database
The client doing much of a system’s work (e.g., executing business logic rules, DHTML, Java applets, ActiveX controls, or style sheets on the client-side) is referred to as thick-client processing. Thick-client processing relieves processing strain on the server and takes full advantage of the client’s processor. With thick-client processing, there are likely to be more incompatibility problems on the client-side. Thin-client versus thick-client application testing issues revolve around the compromises among feature, compatibility, and performance concerns. For more information regarding thin-client versus thick-client applications, please see Chapter 5.
Interoperability Issues

Interoperability is the ability of a system or components within a system to interact and work seamlessly with other systems or other components. This is normally achieved by adhering to certain APIs and communication protocol standards, or to interface-converting technology such as Common Object Request Broker Architecture (CORBA) or Distributed Component Object Model (DCOM). There are many hardware and software interoperability dependencies associated with Web systems, so it is essential that our test-planning process include study of the system architectural design. With interoperability issues, it is possible that information will be lost or misinterpreted in communication between components.

Figure 2.13 shows a simplified Web system that includes three server boxes and a client machine. In this example, the client requests all database records from the server-side. The application server in turn queries the database server. Now, if the database server fails to execute the query, what will happen? Will the database server tell the application server that the query has failed? If the application server gets no response from the database server, will it resend the query? Possibly, the application server will receive an error message that it does not understand. Consequently, what message will be passed back to the client? Will the application server simply notify the client that the request must be resent? Or will it neglect to inform the client of anything at all? All of these scenarios need to be investigated in the study of the system architectural design.
Figure 2.13 Interoperability (client-side operating system, Web browser, and client-based components communicating over TCP/IP with a server-side Web server, application servers, and a SQL database with stored procedures and data).
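To make those questions concrete, here is a hedged Python sketch of an application-server function that asks the database for records and must decide what to do, and what to tell the client, when the query fails or never answers. The function names, error types, and retry policy are invented for illustration; real middleware behaves differently.

# Sketch of the interoperability questions raised by Figure 2.13: what does
# the application server do when the database query fails or does not answer,
# and what does it pass back to the client?
import socket

class QueryFailed(Exception):
    """Stand-in for an error reported by the database server."""

def fetch_all_records(db_query, retries=1, timeout=5.0):
    for attempt in range(retries + 1):
        try:
            return db_query(timeout=timeout)       # ask the database server
        except socket.timeout:
            continue                               # no response: retry? how many times?
        except QueryFailed as err:
            # The database reported a failure. Does the application server
            # understand the message, or pass something cryptic to the client?
            return {"error": "database error: %s" % err}
    # Out of retries: is the client told to resend, or told nothing at all?
    return {"error": "no response from database server"}

def failing_query(timeout):
    raise QueryFailed("syntax error in generated SQL")

print(fetch_all_records(failing_query))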
Testing Considerations

The key areas of testing for Web applications beyond traditional testing include:
■■ Web UI implementation
■■ System integration
■■ Server and client installation
■■ Web-based help
■■ Configuration and compatibility
■■ Database
■■ Security
■■ Performance, load, and stress
For definitions of these tests, see Chapter 3, “Software Testing Basics.” In addition, see Chapters 9 through 19 for in-depth discussions of these tests. Now that we have established the fact that testing Web applications is complex, our objective for the rest of the book is to offer you quick access to information that can help you meet the testing challenges. The materials are built upon some of the testing knowledge that you already have. We hope that you will find the information useful.
Bibliography

Kaner, Cem, Jack Falk, and Hung Q. Nguyen. Testing Computer Software, 2nd ed. New York: John Wiley & Sons, Inc., 1999.
LogiGear Corporation. QA Training Handbook: Testing Web Applications. Foster City, CA: LogiGear Corporation, 2002.
———. QA Training Handbook: Testing Windows Desktop and Server-Based Applications. Foster City, CA: LogiGear Corporation, 2002.
Microsoft Corporation. Microsoft SQL Server 2000 Resource Kit. Redmond, WA: Microsoft Press, 2001.
Orfali, Robert, Dan Harkey, and Jeri Edwards. Client/Server Survival Guide, 3rd ed. New York: John Wiley & Sons, 1999.
Reilly, Douglas J. Designing Microsoft ASP.NET Applications. Redmond, WA: Microsoft Press, 2001.
———. Inside Server-Based Applications. Redmond, WA: Microsoft Press, 2000.
PART Two
Methodology and Technology
CHAPTER 3
Software Testing Basics
Why Read This Chapter?

In general, the software testing techniques that are applied to other applications are the same as those that are applied to Web-based applications. Both cases require basic test types such as functionality tests, forced-error tests, boundary condition and equivalence class analysis, and so forth. Two differences are that the technology variables in the Web environment multiply, and that there is additional focus on security- and performance-related tests, which are very different from feature-based testing. Having a basic understanding of testing methodologies, combined with domain expertise in Web technology, will enable you to effectively test Web applications.

TOPICS COVERED IN THIS CHAPTER
◆ Introduction
◆ Basic Planning and Documentation
◆ Common Terminology and Concepts
◆ Test-Case Development
◆ Bibliography
Introduction

This chapter includes a review of some of the more elemental software testing principles upon which this book is based. Basic testing terminology, practices, and test-case development techniques are covered. However, a full analysis of the theories and practices that are required for effective software testing is not a goal of this book. For more detailed information on the basics of software testing, please refer to Testing Computer Software (Kaner, Falk, and Nguyen, John Wiley & Sons, Inc., 1999). Another useful book that we recommend is Lessons Learned in Software Testing (Kaner, Bach, and Pettichord, John Wiley & Sons, Inc., 2002).
Basic Planning and Documentation Methodical record-keeping builds credibility for the testing team and focuses testing efforts. Records should be kept for all testing. Complete test-case lists, tables, and matrices should be collected and saved. Chapter 7, “Test Planning Fundamentals,” details many practical reporting and planning processes. There are always limits to the amount of time and money that can be invested in testing. There are often scheduling and budgetary constraints on development projects that severely restrict testing—for example, adequate hardware configurations may be unaffordable. For this reason, it is important that cost justification, including potential technical support and outsourcing, be factored into all test planning. To be as efficient as possible, look for redundant test cases and eliminate them. Reuse test suites and locate preexisting test suites when appropriate. Become as knowledgeable as possible about the application under test and the technologies supporting that application. With knowledge of the application’s technologies, you can avoid wasted time and identify the most effective testing methods available. You can also keep the development team informed about areas of possible risk. Early planning is key to the efficiency and cost savings that can be brought to the testing effort. Time invested early in core functionality testing, for example, can make for big cost savings down the road. Identifying functionality errors early reduces the frequency that developers will have to make risky fixes to core functionality late in the development process when the stakes are higher. Test coverage (an assessment of the breadth and depth of testing that a given product will undergo) is a balance of risk and other project concerns, such as
resources and scheduling (complete coverage is virtually impossible). The extent of coverage is a negotiable concept for which the product team will be required to give input.
Common Terminology and Concepts

Following are some essential software testing terms and concepts.
Test Conditions

Test conditions are critically important factors in Web application testing. The test conditions are the circumstances in which an application under test operates. There are two categories of test conditions, application-specific and environment-specific, which are described in the following text.

1. Application-specific conditions. An example of an application-specific condition includes running the same word processor spell-checking test while in Normal View and then again in Page View mode. If one of the tests generates an error and the other does not, then you can deduce that there is a condition specific to the application that is causing the error.

2. Environment-specific conditions. When an error is generated by conditions outside of an application under test, the conditions are considered to be environment-specific.

In general, we find it useful to think in terms of two classes of operating environments, each having its own unique testing implications:

1. Static environments (configuration and compatibility errors). An operating environment in which incompatibility issues may exist, regardless of variable conditions such as processing speed and available memory.

2. Dynamic environments (RAM, disk space, memory, network bandwidth, etc.). An operating environment in which otherwise compatible components may exhibit errors due to memory-related errors or latency conditions.
Static Operating Environments

The compatibility differences between Netscape Navigator and Internet Explorer illustrate a static environment.
Configuration and compatibility issues may occur at any point within a Web system: client, server, or network. Configuration issues involve various server software and hardware setups, browser settings, network connections, and TCP/IP stack setups. Figures 3.1 and 3.2 illustrate two of the many possible physical server configurations, one-box and two-box, respectively.
Dynamic Operating Environments

When the value of a specific environment attribute does not stay constant each time a test procedure is executed, it causes the operating environment to become dynamic. The attribute can be anything from resource-specific (available RAM, disk space, etc.) to timing-specific (network latency, the order of transactions being submitted, etc.).
Figure 3.1 One-box configuration (clients on an Ethernet network connecting to a single physical server that hosts the Web server, application server, and database server).
Figure 3.2 Two-box configuration (clients on an Ethernet network connecting to physical server 1, which hosts the Web server and application server, and physical server 2, which hosts the database server).
Resource Contention Example
Figure 3.3 and Table 3.1 illustrate an example of a dynamic environment condition that involves three workstations and a shared temp space. Workstation C has 400Mb of temporary memory space on it. Workstation A asks Workstation C if it has 200Mb of memory available. Workstation C responds in the affirmative. What happens though if, before Workstation A receives an answer to its request, Workstation B writes 300Mb of data to the temp space on Workstation C? When Workstation A finally receives the response to its request, it will begin writing 200Mb of data to Workstation C—even though there will be only 100Mb of memory available. An error condition will result.
Figure 3.3 Resource contention diagram (Workstations A and B on an Ethernet network sharing temp space on Workstation C).
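The failure can be replayed deterministically with a short simulation: an availability check that does not reserve space, another writer that uses the space in between, and then the write that fails. This is only an illustrative Python model of the Table 3.1 scenario, not code from the book.

# Deterministic replay of the resource contention scenario: a check-then-write
# with no reservation fails when another writer consumes the space in between.
class TempSpace:
    def __init__(self, free_mb):
        self.free_mb = free_mb

    def has_room(self, mb):                 # the unreserved availability check
        return self.free_mb >= mb

    def write(self, mb):
        if self.free_mb < mb:
            raise IOError("out of temp space")   # the resulting error condition
        self.free_mb -= mb

shared = TempSpace(free_mb=400)

# Step 1: Workstation A asks whether 200Mb is available (it is) but does not reserve it.
a_saw_room = shared.has_room(200)

# Step 2: before A acts on that stale answer, Workstation B reserves and writes 300Mb.
if shared.has_room(300):
    shared.write(300)                        # 100Mb now remains

# Step 3: A finally writes, trusting its earlier answer, and hits the error.
try:
    if a_saw_room:
        shared.write(200)
except IOError as err:
    print("Workstation A failed:", err)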
Test Types

Test types are categories of tests that are designed to expose a certain class of error or verify the accuracy of related behaviors. The analysis of test types is a good way to divide the testing of an application methodically into logical and manageable groups of tasks. Test types are also helpful in communicating required testing time and resources to other members of the product team. Following are a number of common test types. See Chapter 7, “Test Planning Fundamentals,” and Chapter 9, “Sample Test Plan,” for information regarding the selection of test types.
Acceptance Testing

The two common types of acceptance tests are development acceptance tests and deployment acceptance tests.

Development Acceptance Test
Release acceptance tests and functional acceptance simple tests are two common classes of test used during the development process. There are subtle differences in the application of these two classes of tests.
Table 3.1 Resource Contention Process

Step 1 (Workstation A): Workstation A needs to write 200Mb of data to the shared temp space on Workstation C. Workstation A asks Workstation C if the needed space is available. Workstation C tells Workstation A that it has the available memory space. Note that Workstation A did not reserve the space. (Before: Workstation C has 400Mb of shared temp space available.)

Step 2 (Workstation B): Workstation B needs to write 300Mb of data to the shared temp space on Workstation C. Workstation B asks Workstation C to give it the needed space. Workstation C tells Workstation B that it has the available memory space, and it reserves the space for Workstation B. Workstation B writes the data to Workstation C. (After: Workstation C has 100Mb of shared temp space available.)

Step 3 (Workstation A): Workstation A finally gets its response from Workstation C and begins to write 200Mb of data. Workstation C, however, now has only 100Mb of temp space left. Without proper error handling, Workstation A crashes.
Release Acceptance Test
The release acceptance test (RAT), also referred to as a build acceptance or smoke test, is run on each development release to check that each build is stable enough for further testing. Typically, this test suite consists of entrance and exit test cases, plus test cases that check mainstream functions of the program with mainstream data. Copies of the RAT can be distributed to developers so that they can run the tests before submitting builds to the testing group. If a build does not pass a RAT test, it is reasonable to do the following:
■■ Suspend testing on the new build and resume testing on the prior build until another build is received.
■■ Report the failing criteria to the development team.
■■ Request a new build.
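A RAT suite often amounts to a handful of scripted checks of mainstream functions run against every build. The sketch below uses pytest-style test functions and the third-party requests library against a hypothetical build under test; the base URL, paths, credentials, and expected strings are placeholders, not part of the book's sample application.

# Hedged sketch of a release acceptance (smoke) test suite; run with pytest.
import requests  # third-party library (pip install requests)

BASE = "http://buildserver.example.com/app"

def test_home_page_loads():
    r = requests.get(BASE + "/", timeout=10)
    assert r.status_code == 200

def test_login_with_mainstream_account():
    r = requests.post(BASE + "/login",
                      data={"user_id": "smoketest", "password": "secret"},
                      timeout=10)
    assert r.status_code == 200
    assert "Welcome" in r.text        # placeholder for the expected landing page

def test_search_returns_results():
    r = requests.get(BASE + "/search", params={"q": "widget"}, timeout=10)
    assert r.status_code == 200
    assert "widget" in r.text.lower()

# If any of these mainstream checks fails, the build is rejected and the
# actions listed above (suspend, report, request a new build) apply.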
TESTING THE SAMPLE APPLICATION: STATIC OPERATING ENVIRONMENT EXAMPLE

This sample application illustrates an incompatibility between a version of Netscape Navigator and a version of Microsoft Internet Explorer. (See Chapter 8, “Sample Application,” for more information.) The application has charting functionality that enables users to generate metrics reports, such as bar charts and line charts. When a user requests a metrics report, the application server pseudocode runs as follows:
1. Connects to the database server and runs the query.
2. Writes the query result to a file named c:\temp\chart.val.
3. Executes the chart Java applet. Reads and draws a graph using data from c:\temp\chart.val.
4. Sends the Java applet to the browser.

During testing of the sample application, it was discovered that the charting feature works on one of the preceding configurations but not the other. The problem occurred only in the two-box configuration. After examining the code, it was learned that the problem was in steps 2 and 3. In step 2, the query result is written to c:\temp\chart.val on the database server’s local drive. In step 3, the chart Java applet is running on the application server, which is not in the same box as the database server. When the applet attempts to open the file c:\temp\chart.val on the application server’s local drive, the file is not found.

It should not be inferred from this example that you should read the code every time you come across an error—leave the debugging work for the developers. It is essential, however, to identify which server configurations are problematic and include such information in bug reports. You should consider running a cursory suite of test cases on all distributed configurations that are supported by the application server under test. You should also consider replicating every bug on at least two configurations that are extremely different from each other when configuration dependency is suspect.
Consider the compatibility issues involved in the following example:
◆ The home directory path for the Web server on the host myserver is mapped to C:\INETPUB\WWWROOT\.
◆ When a page is requested from http://myserver/, data is pulled from C:\INETPUB\WWWROOT\.
◆ A file named mychart.jar is stored at C:\INETPUB\WWWROOT\MYAPP\BIN.
◆ The application session path (relative path) points to C:\INETPUB\WWWROOT\MYAPP\BIN, and a file is requested from .\LIB.

Using a version of Internet Explorer, the Web server looks for the file in C:\INETPUB\WWWROOT\MYAPP\BIN\LIB, because the browser understands relative paths. This is the desired behavior, and the file will be found in this scenario. Using a version of Netscape Navigator, which uses absolute paths, the Web server looks for the file in C:\INETPUB\WWWROOT\LIB. This is a problem because the file (mychart.jar) will not be found. The feature does not work with this old version of Netscape Navigator (which some people still use). Bringing up the Java Console, you can see the following, which confirms the finding:

#Unable to load archive http://myserver/lib/mychart.jar:java.io.IOException:

This is not to say that Internet Explorer is better than Netscape Navigator. It simply means that there are incompatibility issues between browsers. Code should not assume that relative paths work with all browsers.
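The two interpretations can be compared with Python's standard URL resolution, which follows the rules that modern browsers use. This is only a sketch of the path arithmetic, using an invented page name in the session path; it does not exercise any particular browser.

# How the same reference resolves under the two interpretations described above.
from urllib.parse import urljoin

page = "http://myserver/myapp/bin/report.html"    # a page in the session path

# Standard relative resolution (the Internet Explorer behavior in this example):
print(urljoin(page, "./lib/mychart.jar"))
# -> http://myserver/myapp/bin/lib/mychart.jar   (the file is found here)

# Resolving the reference from the server root (the old Netscape behavior):
print(urljoin(page, "/lib/mychart.jar"))
# -> http://myserver/lib/mychart.jar             (the file is not there)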
Functional Acceptance Simple Test
The functional acceptance simple test (FAST) is run on each development release to check that key features of the program are appropriately accessible and functioning properly on at least one test configuration (preferably the minimum or common configuration). This test suite consists of simple test cases that check the lowest level of functionality for each command to ensure that task-oriented functional tests (TOFTs) can be performed on the program. The objective is to decompose the functionality of a program down to the command level and then apply test cases to check that each command works as intended. No attention is paid to the combination of these basic commands, the context of the feature that is formed by these combined commands, or the end result of the overall feature. For example, FAST for a File/Save As menu command checks that the Save As dialog box displays. However, it does not validate that the overall file-saving feature works, nor does it validate the integrity of saved files.
Typically, errors encountered during the execution of FAST are reported through the standard issue-tracking process. Suspending testing during FAST is not recommended. Note that it depends on the organization for which you work. Each might have different rules in terms of which test cases should belong to RAT versus FAST, and when to suspend testing or to reject a build.

Deployment Acceptance Test
The configurations on which the Web system will be deployed will often be quite different from the development and test configurations. Testers must consider this in the preparation and writing of test cases for installation-time acceptance tests. This type of test usually includes the full installation of the applications to the targeted environments or configurations. Then FASTs and TOFTs are executed to validate the system functionality.
Feature-Level Testing

This is where we begin to do some serious testing, including boundary testing and other difficult but valid test circumstances.

BUG ANALYZING AND REPRODUCTION TIPS

To reproduce an environment-dependent error, you must replicate both the exact sequence of activities and the environment conditions (e.g., operating system, browser version, add-on components, database server, Web server, third-party components, client-server resources, network bandwidth and traffic, etc.) in which the application operates. Environment-independent errors are easier to reproduce, as they do not require replicating the operating environment. With environment-independent errors, all that need to be replicated are the steps that generate the error.

BROWSER BUG ANALYZING TIPS

◆ Check if the client operating system (OS) version and patches meet system requirements.
◆ Check if the correct version of the browser is installed on the client machine.
◆ Check if the browser is properly installed on the machine.
◆ Check the browser settings.
◆ Check if the bug is reproducible in different browsers (e.g., Netscape Navigator versus Internet Explorer).
◆ Check with different supported versions of the same browser (e.g., Internet Explorer versions 4.1, 4.2, 5.0, 5.5, etc.).
Task-Oriented Functional Test
The task-oriented functional test (TOFT) consists of positive test cases that are designed to verify program features by checking that each feature performs as expected against specifications, user guides, requirements, and design documents. Usually, features are organized into list or test matrix format. Each feature is tested for:

■■ The validity of the task it performs with supported data conditions under supported operating conditions.
■■ The integrity of the task's end result.
■■ The feature's integrity when used in conjunction with related features.
Forced-Error Test
The forced-error test (FET) consists of negative test cases that are designed to force a program into error conditions. A list of all error messages that the program issues should be generated. The list is used as a baseline for developing test cases. An attempt is made to generate each error message in the list. Obviously, tests to validate error-handling schemes cannot be performed until all the handling and error messages have been coded. However, FETs should be thought through as early as possible. Sometimes, the error messages are not available. Nevertheless, error cases can still be considered by walking through the program and deciding how the program might fail in a given user interface (UI), such as a dialog, or in the course of executing a given task or printing a given report. Test cases should be created for each condition to determine what error message is generated (if any).

USEFUL FET EXECUTION GUIDELINES

■■ Check that the error-handling design and the error communication methods are consistent.
■■ Check that all common error conditions are detected and handled correctly.
■■ Check that the program recovers gracefully from each error condition.
■■ Check that the unstable states of the program (e.g., an open file that needs to be closed, a variable that needs to be reinitialized, etc.) caused by the error are also corrected.
■■ Check each error message to ensure that:
   ■■ Message matches the type of error detected.
   ■■ Description of the error is clear and concise.
   ■■ Message does not contain spelling or grammatical errors.
   ■■ User is offered reasonable options for getting around or recovering from the error condition.
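In practice, FET execution is often driven directly from the baseline error-message list. The sketch below is only a minimal illustration of that data-driven idea; the submit_form() helper, the field names, the invalid inputs, and the message texts are all assumptions made up for the example and would be replaced with calls into the application under test.

```python
# A minimal data-driven forced-error test sketch (hypothetical helper and data).
# Each entry pairs an invalid input with the error message the program is
# expected to issue, taken from the baseline list of error messages.
ERROR_CASES = [
    ("zip_code", "ABCDE",  "ZIP code must contain only digits."),
    ("zip_code", "1234",   "ZIP code must be five digits long."),
    ("email",    "nobody", "Please enter a valid e-mail address."),
]

def submit_form(field, value):
    """Hypothetical helper: posts the value and returns the message displayed."""
    raise NotImplementedError("Replace with a call into the application under test.")

def run_forced_error_tests():
    failures = []
    for field, bad_value, expected_message in ERROR_CASES:
        actual_message = submit_form(field, bad_value)
        if actual_message != expected_message:
            failures.append((field, bad_value, expected_message, actual_message))
    return failures
```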
Boundary Test
Boundary tests are designed to check a program's response to extreme input values, as well as to the extreme output values those inputs can generate. It is important to check that a program handles input values and output results correctly at the lower and upper boundaries. Keep in mind that you can create extreme boundary results from nonextreme input values, so it is essential to analyze how to generate extremes of both types. In addition, sometimes you know that there is an intermediate variable involved in processing; if so, it is useful to determine how to drive it through the extremes and through special conditions such as zero or an overflow condition.
System-Level Test

System-level tests consist of batteries of tests that are designed to fully exercise a program as a whole and check that all elements of the integrated system function properly. System-level test suites also validate the usefulness of a program and compare end results against requirements.
Real-World User-Level Test

These tests simulate the actions customers may take with a program. Real-world user-level testing often detects errors that are otherwise missed by formal test types.
Exploratory Test

Exploratory tests do not necessarily involve a test plan, checklists, or assigned tasks. The strategy here is to use past testing experience to make educated guesses about places and functionality that may be problematic. Testing is then focused on those areas. Exploratory testing can be scheduled. It can also be reserved for unforeseen downtime that presents itself during the testing process.
Load/Volume Test

Load/volume tests study how a program handles large amounts of data, excessive calculations, and excessive processing, often over a long period of time. These tests do not necessarily have to push or exceed upper functional limits. Load/volume tests can, and usually must be, automated.

FOCUS OF LOAD/VOLUME TESTING

■■ Pushing through large amounts of data with extreme processing demands
■■ Requesting many processes simultaneously
■■ Repeating tasks over a long period of time
Load/volume tests, which involve extreme conditions, are normally run after the execution of feature-level tests, which prove that a program functions correctly under normal conditions. See Chapter 19, "Performance Testing," for more information on planning for these tests.
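Automation for this kind of test can start very simply. The sketch below, offered only as an illustration, uses Python's standard library to issue many simultaneous requests against a page and count the failures; the URL, thread count, and iteration count are placeholders to be replaced for a real test.

```python
import concurrent.futures
import urllib.request

URL = "http://myserver/testpage"   # placeholder target for the system under test
THREADS = 20                       # simultaneous requests per round
ITERATIONS = 50                    # number of rounds (repeat over a long period)

def hit_server(_):
    """Request the page once; return True on HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(URL, timeout=30) as resp:
            return resp.status == 200
    except Exception:
        return False

def run_load():
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=THREADS) as pool:
        for _ in range(ITERATIONS):
            # Issue THREADS requests at once, over and over.
            results.extend(pool.map(hit_server, range(THREADS)))
    failures = results.count(False)
    print(f"{len(results)} requests, {failures} failures")

if __name__ == "__main__":
    run_load()
```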
Stress Test

Stress tests force programs to operate under limited resource conditions. The goal is to push the upper functional limits of a program to ensure that it can function correctly and handle error conditions gracefully. Examples of resources that may be artificially manipulated to create stressful conditions include memory, disk space, and network bandwidth. If other memory-oriented tests are also planned, they should be performed here as part of the stress test suite. Stress tests can be automated. See Chapter 19 for more information on planning for these tests.
Performance Test

The primary goal of performance testing is to develop effective enhancement strategies for maintaining acceptable system performance. Performance testing is a capacity analysis and planning process in which measurement data are used to predict when load levels will exhaust system resources. The testing team should work with the development team to identify tasks to be measured and to determine acceptable performance criteria. The marketing group may even insist on meeting a competitor's standards of performance. Test suites can be developed to measure how long it takes to perform relevant tasks. See Chapter 19 for more information on planning for these tests.
Fail-over Test

Fail-over testing involves putting the system under test into a state of failure in order to trigger the predesigned system-level error-handling and recovery processes. These processes might be automatic recovery through a restart, or redirection to a backup system or another server.
Availability Test

Availability testing measures the degree to which a system or component is operational and accessible, sometimes known as uptime. This testing involves not only putting the system under a certain load or condition but also analyzing the components that may fail and developing test scenarios that may cause them to fail. In availability testing you may devise a scenario of running transactions to bring down the server and make it unavailable, thereby initiating the built-in recovery and standby systems.
Reliability Test

Reliability testing is similar to availability testing, but reliability infers operational availability under certain conditions, over some fixed duration of time,
for example, 48 or 72 hours. Reliability testing is sometimes known as soak testing. Here you are testing the continuous running of transactions, looking for memory leaks, locks, or race-condition errors. If the system stays up or properly initiates the fail-over process, it passes the test. Reliability testing would mean running those low-system-resource tests over, perhaps, 72 hours, looking not for the system's response time when the low-resource condition is detected, but for what happens if the system stays in that condition for a long time.
Scalability Testing

The goal of scalability testing is to determine how well the system will scale, or continue to function with growth, without having to be switched to a new system or redesigned. Client-server systems such as Web systems often grow to support more users, more activities, or both. The idea is to support the growth by adding processors and memory to the system or server-side hardware, without changing the system software, which can be expensive.
API Test

An application program interface (API) is a set of input interfaces for functions and procedures that are made available by application or operating system components that export them, and for other application and operating system components that use or import them. Where the graphical user interface (GUI) receives requests from the user, the API receives requests from software components. Because there is no direct access to the API through the GUI, a harness application must be built to provide the access and the test-case execution capability needed for API testing. These harnesses are most often built by developers. Exercising the API involves designing test cases that call the function with certain selected parameters, calling the function with many possible combinations of parameters, and designing the sequences in which the function is called to test functionality and error conditions. In addition, due to the labor-intensive nature of API testing, caused by the vast number of possible test cases, you should consider test automation architecture and methods to effectively test the software at the API level.
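A very small harness can already exercise an exported function across many parameter combinations. The following sketch is illustrative only; api_function is a stand-in for whatever call the component actually exports, and the candidate parameter values are assumptions chosen for the example.

```python
import itertools

def api_function(mode, size, flags):
    """Stand-in for the exported API call under test."""
    raise NotImplementedError("Call into the component's API here.")

# Candidate values for each parameter, including boundary and invalid ones.
MODES = ["read", "write", "invalid-mode"]
SIZES = [0, 1, 65535, -1]
FLAGS = [None, "overwrite"]

def exercise_api():
    """Call the API with every combination of parameters and record the outcome."""
    results = []
    for mode, size, flags in itertools.product(MODES, SIZES, FLAGS):
        try:
            value = api_function(mode, size, flags)
            results.append((mode, size, flags, "returned", value))
        except Exception as err:          # record the error condition raised
            results.append((mode, size, flags, "raised", type(err).__name__))
    return results
```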
Regression Test

Regression testing is used to confirm that fixed bugs have, in fact, been fixed, that new bugs have not been introduced in the process, and that features that were proven correctly functional are intact. Depending on the size of a project, cycles of regression testing may be performed once per milestone or once per build. Some bug regression testing may also be performed during each acceptance test cycle, focusing on only the most important bugs. Regression tests can be automated.
CONDITIONS DURING WHICH REGRESSION TESTS MAY BE RUN

Issue-fixing cycle. Once the development team has fixed issues, a regression test can be run to validate the fixes. Tests are based on the step-by-step test cases that were originally reported.
■■ If an issue is confirmed as fixed, then the issue report status should be changed to Closed.
■■ If an issue is confirmed as fixed, but with side effects, then the issue report status should be changed to Closed. However, a new issue should be filed to report the side effect.
■■ If an issue is only partially fixed, then the issue report resolution should be changed back to Unfixed, supplemented with comments outlining the outstanding problems.
Note: Your company might use terminology other than Closed or Unfixed.

Open-status regression cycle. Periodic regression tests may be run on all open issues in the issue-tracking database. During this cycle, issue status is confirmed as one of the following: the report is reproducible as-is with no modification; the report is reproducible with additional comments or modifications; or the report is no longer reproducible.

Closed-fixed regression cycle. In the final phase of testing, a full regression test cycle should be run to confirm the status of all fixed-closed issues.

Feature regression cycle. Each time a new build is cut, or when the project is in the final phase of testing (depending on the organizational procedure), a full regression test cycle should be run to confirm that the features proven correctly functional are still working as expected.
Compatibility and Configuration Test

Compatibility and configuration testing is performed to check that an application functions properly across various hardware and software environments. Often, the strategy is to run FASTs or a subset of TOFTs on a range of software and hardware configurations. Sometimes, another strategy is to create a specific test that takes into account the error risks associated with configuration differences. For example, you might design an extensive series of tests to check for browser compatibility issues. You might not run these tests as part of your normal RATs, FASTs, or TOFTs.
Software compatibility configurations include variances in OS versions, input/output (I/O) devices, extensions, network software, concurrent applications, online services, and firewalls. Hardware configurations include variances in manufacturers, CPU types, RAM, graphic display cards, video capture cards, sound cards, monitors, network cards, and connection types (e.g., T1, DSL, modem, etc.).
Documentation Test

Testing of reference guides and user guides checks that all features are reasonably documented. Every page of documentation should be keystroke-tested for the following errors:

■■ Accuracy of every statement of fact.
■■ Accuracy of every screen shot, figure, and illustration.
■■ Accuracy of placement of figures and illustrations.
■■ Accuracy of every tutorial, tip, and instruction.
■■ Accuracy of marketing collateral (claims, system requirements, and screen shots).
■■ Accuracy of downloadable documentation (PDFs, HTML, or text files).
Online Help Test
Online help tests check the accuracy of help contents, correctness of features in the help system, and functionality of the help system. See Chapter 15, "Help Tests," for more information.

Utilities/Toolkits and Collateral Test
If there are utilities and software collateral items to be tested, appropriate analysis should be done to ensure that suitable and adequate testing strategies are in place.

Install/Uninstall Test
Web systems often require both client-side and server-side installs. Testing of the installer checks that installed features function properly, including icons, support documentation, the README file, configuration files, and registry keys (in Windows operating systems). The test verifies that the correct directories have been created and that the correct system files have been copied to the appropriate directories. The test also confirms that various error conditions have been detected and handled gracefully. Testing of the uninstaller checks that the installed directories and files have been appropriately removed, that configuration and system-related files have also been appropriately removed or modified, and that the operating environment has been restored to its original state. See Chapter 16, "Installation Tests," for more information.
User Interface Tests
Ease-of-use UI testing evaluates how intuitive a system is. Issues pertaining to navigation, usability, commands, and accessibility are considered. User interface functionality testing examines how well a UI operates to specifications.

AREAS COVERED IN UI TESTING

■■ Usability
■■ Look and feel
■■ Navigation controls/navigation bar
■■ Instructional and technical information style
■■ Images
■■ Tables
■■ Navigation branching
■■ Accessibility
Usability Tests
Usability testing consists of a variety of methods: setting up the product, assigning users tasks to carry out, having the users carry out those tasks, observing the users as they interact with the product, and collecting information to measure ease of use or satisfaction. See the "Usability and Accessibility Testing" section in Chapter 11, "Functional Tests," for more information.

Accessibility Tests
In producing an accessible Web site, the designer must take into consideration that the Web content must be available to, and usable by, everyone, including people with disabilities. Accessibility testing is done to verify that the application meets accessibility standards and practices. The goal of accessibility is similar to that of usability: to ensure that the end user gets the best experience in interacting with the product or service. The key difference is that accessibility accomplishes this goal by making the product usable by a larger population, including people with disabilities.

External Beta Testing
External beta testing offers developers their first glimpse at how users may actually interact with a program. Copies of the program or a test URL, sometimes accompanied by a letter of instruction, are sent out to a group of volunteers who try out the program and respond to questions in the letter. Beta testing is black-box, real-world testing. However, beta testing can be difficult to manage, and the feedback that it generates normally comes too late in the development process to contribute to improved usability and functionality. External beta-tester feedback may be reflected in a README file or deferred to future releases.
Dates Testing
A program's ability to handle calendar transitions is tested to ensure that internal systems are not scrambled by particular date changes. Y2K testing, for example, checked that dates later than December 31, 1999 did not scramble internal systems, and Y2K-related considerations remain an issue well beyond the year 2000 due to future leap-year and business-calendar changeovers. Similarly, programs that store time as a signed 32-bit count of seconds (as many C and C++ programs and 32-bit UNIX systems do) will roll over on January 19, 2038, meaning that dates will wrap around to 1901. Many computer scientists predict this problem to be much more serious than the Y2K problem.
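The 2038 limit follows directly from the arithmetic of a signed 32-bit counter of seconds since January 1, 1970. The short calculation below (Python, purely illustrative) reproduces both the last representable moment and the date the counter wraps to.

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Largest and smallest values of a signed 32-bit seconds counter.
last_moment = EPOCH + timedelta(seconds=2**31 - 1)
wrap_around = EPOCH + timedelta(seconds=-2**31)

print(last_moment)   # 2038-01-19 03:14:07+00:00
print(wrap_around)   # 1901-12-13 20:45:52+00:00
```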
Security Tests

Security measures protect Web systems from both internal and external threats. Security testing is done to determine whether the application's security features have been implemented as designed. Within the context of software testing, the focus of the work is on functional tests, forced-error tests, and, to a certain extent, penetration tests at the application level. This means that you should seek out vulnerabilities and information leaks due primarily to programming practices and, to a certain extent, to misconfiguration of Web servers and other application-specific servers. Test for the security side effects or vulnerabilities caused by the functionality implementation. At the same time, test for functional side effects caused by the security implementation. See Chapter 18, "Web Security Testing," for more information on planning and testing for security.

PRIMARY COMPONENTS REQUIRING SECURITY TESTING

■■ Client and server software, databases, and software components
■■ Server boxes
■■ Client workstations
■■ Networks
Unit Tests
Unit tests are tests that evaluate the integrity of software code units before they are integrated with other software units. Developers normally perform unit testing. Unit testing represents the first round of software testing, when developers test their own software and fix errors in private.
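As a point of reference, a unit test in its simplest form is just a small program that calls one unit of code with known inputs and checks the result. The sketch below uses Python's standard unittest module against a trivial, made-up function; the function and its rules are assumptions invented for the example.

```python
import unittest

def parse_zip_code(text):
    """Unit under test (hypothetical): return a 5-digit ZIP code or raise ValueError."""
    text = text.strip()
    if len(text) != 5 or not text.isdigit():
        raise ValueError("ZIP code must be exactly five digits")
    return text

class ParseZipCodeTests(unittest.TestCase):
    def test_valid_code_is_returned(self):
        self.assertEqual(parse_zip_code(" 22222 "), "22222")

    def test_letters_are_rejected(self):
        with self.assertRaises(ValueError):
            parse_zip_code("ABCDE")

if __name__ == "__main__":
    unittest.main()
```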
Phases of Development

The software development process is normally divided into phases. Each phase of development entails different test types, coverage depth, and demands on the testing effort. Refer to Chapter 7, Table 7.1, "Test Types and Their Place in the Software Development Process," for a visual representation of test phases and corresponding test types.
Development phases should be defined by agreed-upon, clearly communicated, and measurable criteria. Often, people on the same development team will have different understandings of how particular phases are defined. For example, one phase might be defined such that an application cannot officially begin its beta phase of development until all crash or data-loss bugs have been fixed. Alternatively, beta is also commonly defined as being a product that is functionally complete (though bugs may still be present, all features have been coded). Disagreement over how a phase is defined can lead to problems in perception of completeness and product stability. It is often the role of the test team to define the milestone or completion criteria that must be met for a project to pass from one phase to another. Defining and agreeing upon milestone and completion criteria allows the testing, development, and marketing groups to work more effectively as a team. The specifics of the milestones are not as important as the fact that they are clearly communicated. It is also a concern that developers usually consider that they have reached a milestone when the build is done. In practice, testing still must confirm that this is true, and the confirmation process may take from a few days to a few weeks.

COMMON EXAMPLES OF PHASES OF SOFTWARE DEVELOPMENT

Alpha. A significant and agreed-upon portion (if not all) of the product has been completed (the product includes code, documentation, additional art or other content, etc.). The product is ready for in-house use.

Pre-beta (or beta candidate). A build that is submitted for beta acceptance. If the build meets the beta criteria (as verified by the testing group), then the software is accepted into the beta phase of development.

Beta. Most, or all, of the product is complete and stable. Some companies send out review copies (beta copies) of software to customers once software reaches this phase.

UI freeze. Every aspect of the application's UI is complete. Some companies accept limited changes to error messaging and repairs to errors in help screens during this phase.

Prefinal (or golden master candidate (GMC)). A final candidate build has been submitted for review to the testing team. If the software is complete and all GMC tests are passed, then the product is considered ready for final testing.

Final test. This is the last round of testing before the product is migrated to the live Web site, sent to manufacturing, or posted on the Web site.

Release (or golden master). The build that will eventually be shipped to the customer, posted on the Web, or migrated to the live Web site.
OTHER SOFTWARE TESTING TERMS

Test case. A test that (ideally) executes a single well-defined test objective (e.g., a specific behavior of a feature under a specific condition). Early in testing, a test case might be extremely simple; later, however, the program is more stable, so you will need more complex test cases to provide useful information.

Test script. Step-by-step instructions that describe how a test case is to be executed. A test script may contain one or more test cases.

Test suite. A collection of test scripts or test cases used for validating bug fixes (or finding new bugs) within a logical or physical area of a product. For example, an acceptance test suite contains all the test cases that are used to verify that software has met certain predefined acceptance criteria. A regression suite, on the other hand, contains all the test cases that are used to verify that all previously fixed bugs are still fixed.

Test specification. A set of test cases, input, and conditions that are used in the testing of a particular feature or set of features. A test specification often includes descriptions of expected results.

Test requirement. A document that describes items and features that are tested under a required condition.

Test plan. A management document outlining risks, priorities, and schedules for testing. (See Part Three for more information.)
Test-Case Development

There are many methods available for analyzing software in an effort to develop appropriate test cases. The following subsections focus on several methods of establishing coverage and developing effective test cases. By no means is this a complete and comprehensive list of test-case design methods. It's merely a means for us to share the methods that we actually apply and find useful in our day-to-day testing activities. A combination of most, if not all, of the following test design methods should be used to develop test cases for the application under test.
Equivalence Class Partitioning and Boundary Condition Analysis

Equivalence class partitioning is a timesaving practice that identifies tests that are equivalent to one another; when two inputs are equivalent, you expect them to cause the identical sequence of operations to take place or to cause the
same path to be executed through the code. When two or more test cases are seen as equivalent, the resource savings associated with not running the redundant tests normally outweigh the risk. An example of an equivalence class includes the testing of a data-entry field in an HTML form. If the field accepts a five-digit ZIP code (e.g., 22222), then it can reasonably be assumed that the field will accept all other five-digit ZIP codes (e.g., 33333, 44444, etc.). Because all five-digit ZIP codes are of the same equivalence class, there is little benefit in testing more than one of them. In equivalence partitioning, both valid and invalid values are treated in this manner. For example, if entering six letters into the ZIP code field just described results in an error message, then it can reasonably be assumed that all six-letter combinations will result in the same error message. Similarly, if entering a four-digit number into the ZIP code field results in an error message, then it should be assumed that all four-digit combinations will result in the same error message.

EXAMPLES OF EQUIVALENCE CLASSES

■■ Ranges of numbers (such as all numbers between 10 and 99, which are of the same two-digit equivalence class)
■■ Membership in groups (dates, times, country names, etc.)
■■ Invalid inputs (placing symbols into text-only fields, etc.)
■■ Equivalent output events (variations of inputs that produce the same output)
■■ Equivalent operating environments
■■ Repetition of activities
■■ Number of records in a database (or other equivalent objects)
■■ Equivalent sums or other arithmetic results
■■ Equivalent numbers of items entered (such as the number of characters entered into a field)
■■ Equivalent space (on a page or on a screen)
■■ Equivalent amounts of memory, disk space, or other resources available to a program
Boundary values mark the transition points between equivalence classes. They can be limit values that define the line between supported inputs and nonsupported inputs, or they can define the line between supported system requirements and nonsupported system requirements. Applications are more susceptible to errors at the boundaries of equivalence classes, so boundary condition tests can be quite effective at uncovering errors.
Generally, each equivalence class is partitioned by its boundary values. Nevertheless, not all equivalence classes have boundaries. For example, given the following four browser equivalence classes (Netscape Navigator 4.6 and 4.6.1, and Microsoft Internet Explorer 4.0 and 5.0), there is no boundary defined among the classes. Each equivalence class represents potential risk. Under the equivalence class approach to developing test cases, at most nine test cases should be executed against each partition. Figure 3.4 illustrates how test cases can be built around equivalence class partitions. In Figure 3.4, LB stands for lower boundary and UB stands for upper boundary. The test cases include three tests clustered around each of the boundaries, one test that falls within the partition's boundaries, and two tests that fall well beyond the boundaries. Figure 3.5 illustrates another boundary condition test-case design example taken from the sample application described in Chapter 8. To develop test cases via equivalence class partitioning and boundary condition analysis, you must do the following:

■■ Identify the equivalence classes.
■■ Identify the boundaries.
■■ Identify the expected output(s) for valid input(s).
■■ Identify the expected error handling (ER) for invalid inputs.
■■ Generate a table of test cases (a maximum of nine for each partition).
Note that this example is oversimplified; it indicates only two equivalent classes. In reality, there are many other equivalent classes, such as invalid character class (nonalphanumeric characters), special cases such as numbers with decimal points, leading zeros, or leading spaces, and so on. Chapter 11 contains additional information regarding boundary analysis.
Figure 3.5 Sample application test cases (valid values: 1 to 9999, i.e., a one- to four-digit value).
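For a simple numeric partition like the one in Figure 3.5 (valid values 1 to 9999), the boundary-clustered test inputs can even be generated mechanically. The sketch below is only an illustration of the scheme described above; the "well beyond" offset of 1000 is an arbitrary choice.

```python
def boundary_cases(lower, upper, far=1000):
    """Return test inputs clustered around a partition's boundaries:
    three values at each boundary, one value inside the partition,
    and two values well beyond it (at most nine cases per partition)."""
    return [
        lower - 1, lower, lower + 1,        # cluster at the lower boundary
        (lower + upper) // 2,               # a value inside the partition
        upper - 1, upper, upper + 1,        # cluster at the upper boundary
        lower - far, upper + far,           # values well beyond the boundaries
    ]

# Valid values for the sample application's field: 1 to 9999.
print(boundary_cases(1, 9999))
# -> [0, 1, 2, 5000, 9998, 9999, 10000, -999, 10999]
```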
State Transition

State transition involves analysis of the transitions between an application's states, the events that trigger the transitions, and the results of the transitions. This is done by using a model of the application's expected behaviors. A useful resource available on the Internet that contains both articles and links is Harry Robinson's Model-Based Testing Page, www.model-based-testing.org. This Web site contains several articles and useful links on the topic of model-based testing.

GENERAL STEPS FOR STATE TRANSITION TEST-DESIGN ANALYSIS

1. Model, or identify, all of an application's supported states. See Figures 3.6 and 3.7.
2. For each test case, define the following:
   ■■ The starting state
   ■■ The input events that cause the transitions
   ■■ The output results or events of each transition
   ■■ The end state
3. Build a diagram connecting the states of the application based on the expected behavior. This model is called a state diagram. This diagram illustrates the relationships between the states, events, and actions of the application. See Figure 3.8.
4. Generate a table of test cases that addresses each state transition. See Figure 3.9.
Figure 3.6 Edit View state (showing the Navigation Command controls and the Current View Mode).

Figure 3.7 Full View state (showing the Navigation Command controls and the Current View Mode).
Figure 3.8 Transitions diagram (Edit View and Full View record states connected by the F, P, N, L, FV, EV, and LK navigation commands).
State codes used in the test matrix: a = Edit View, Record [1st]; b = Edit View, Record [1st + 1]; c = Edit View, Record [x]; d = Edit View, Record [x – 1]; e = Edit View, Record [Last]; f = Edit View, Record [Last – 1]; g = Full View, Record [1st]; h = Full View, Record [x]. Navigation command (event) codes: F = First; P = Previous; N = Next; L = Last; FV = Full View; EV = Edit View; LK = Record to ID Link.

Figure 3.9 Test matrix (25 test cases, each defined by a start view mode, a navigation command as input, and an expected end view mode).
TESTING THE SAMPLE APPLICATION

Figures 3.6 and 3.7 show two different states that are available within the sample application. (See Chapter 8 for details regarding the sample application.) Figure 3.6 shows the application in Edit View mode. Available navigation options from this state include Full View, First, Previous, Next, and Last. Figure 3.7 shows the application in Full View. Available navigation options from this state include Edit View and the Report Number hyperlink. Figure 3.8 diagrams the transitions, events, and actions that interconnect these two states. Figure 3.9 is a table of test cases that targets each of the state transitions. Each test case has a beginning state (Start View Mode), an event or input (Navigation Command), and an expected end state (End View Mode).
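The same information that goes into the state diagram and test matrix can be kept in a small transition table and expanded into test cases automatically. The sketch below is a simplified, illustrative model of the two view modes only; it does not reproduce every record-position state from Figure 3.8, and the command names are taken from the description above.

```python
# Simplified state model: (current state, navigation command) -> expected next state.
TRANSITIONS = {
    ("Edit View", "Full View"):           "Full View",
    ("Edit View", "First"):               "Edit View",
    ("Edit View", "Previous"):            "Edit View",
    ("Edit View", "Next"):                "Edit View",
    ("Edit View", "Last"):                "Edit View",
    ("Full View", "Edit View"):           "Edit View",
    ("Full View", "Report Number link"):  "Edit View",
}

def generate_state_tests(transitions):
    """Expand the transition table into (start state, input, expected end state) cases."""
    return [
        {"start": start, "input": command, "expected_end": end}
        for (start, command), end in transitions.items()
    ]

for case in generate_state_tests(TRANSITIONS):
    print(case)
```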
Use Cases

A use case is a model of how a system is being used. It is a text description, often accompanied by a graphic representation, of system users, called actors, and the use of the system, called actions. Use cases usually include descriptions of system behavior when the system encounters errors. A typical use case might read:

■■ An Internet surfer reads reviews for movies in a movie-listing database.
■■ The surfer searches for a movie by name.
■■ The surfer searches for theaters showing that movie.
Today, use cases, rightly or wrongly, are most commonly used as requirements from which testers are told to build test cases. Opinions differ greatly as to what a use case is and is not, but this is not the forum for this discussion. Our job is to prepare you for the inevitable task of developing a test strategy or test cases when someone on the team says, “We are now using use cases to aid our development and for you to build your test cases.” A large part of the high growth of the use case method has been the adoption of OMG’s UML (Unified Modeling Language) that employs use cases. On Rational Corporation’s Web site (www.rational.com/products/whitepapers /featucreqom.pdf) is an excellent white paper, written by Dean Leffingwell, called “Features, Requirements, Use Cases, Oh My!” Though it does not deal with testing issues, it does define use cases and requirements for UML, and puts these items in the context of the software development life cycle. Use cases describe the functional behavior of the system; they do not capture the nonfunctional requirements or the system design, so there must be other documentation to build thorough test cases.
Use cases generally contain a use case name; the scope or purpose of the use case; the actor executing the action; preconditions for the scenario; postconditions; extensions, sometimes called secondary scenarios; and uses or extends, meaning alternative paths and exceptions giving a description of some error conditions. More detailed use cases might detail the normal course of events as the step-by-step actions on the system. There are two excellent templates to study for this purpose:

■■ Karl Wiegers from ProcessImpact.com, at: www.processimpact.com/process_assets/use_case_template.doc
■■ Alistair Cockburn's use case template, at: http://members.aol.com/acockburn/papers/uctempla.htm
The degree to which use cases are helpful in designing test cases differs, as usual, according to the author of the use case and how formally or informally the author chose to write them. As with any development documentation, thoroughly detailed use cases can become difficult to maintain and quickly become outdated. Informal use cases may have a longer life, but they often lack adequate information from which to develop detailed test cases. Since use cases describe how a system is being used, rather than how it is built, they can be a great asset in developing real-world test cases. From them, we can derive process flow, path, functional, and exception-handling information. Well-written use cases contain, at least, the precondition, postcondition, and exception information needed for test-case development. Use cases generally contain neither UI-specific nor nonfunctional system information; this information, needed for test cases, must come from other sources. You can gain insight from use cases, such as an understanding of which features different users will be using and the order in which they will be used. Often, we expect users to enter at a given point; with Web applications, however, users may be able to access an application through many points. In event-driven systems, learning about "real-world" activities allows us to model system usage. Real-world use cases refer not just to the human users but also involve modeling other systems with which the application under test will interact. From a security perspective, add cases for unauthorized use.

GENERAL STEPS FOR USE-CASE TEST-DESIGN ANALYSIS

1. Gather all use cases for the area under test.
2. Analyze these use cases to discover the flow of the intended functionality.
3. Analyze each use case based on its normal course of events.
4. Analyze each use case based on secondary scenarios, exceptions, and extends.
5. Identify additional test cases that might be missing.
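When use cases are captured in a structured form, the normal course, secondary scenarios, and exceptions can be pulled out mechanically as a first checklist of test-case stubs. The sketch below is illustrative only; the dictionary loosely mirrors the fields of use case 1.1 shown later in this section, and the generated entries are just stubs that a tester still has to flesh out with steps and expected results.

```python
# A use case captured as data (fields mirror the template used in this chapter).
USE_CASE = {
    "id": "1.1",
    "name": "Search by Movie",
    "normal_course": "User searches for a movie by name and sees the results page.",
    "secondary_scenarios": {
        "1.1.SS.1": "Movie name is not in the database; 'not found' message shown.",
    },
    "exceptions": {
        "1.1.EX.1": "Search Text field left blank; alert message shown.",
    },
}

def test_case_stubs(use_case):
    """Emit one test-case stub for the normal course and one per scenario/exception."""
    stubs = [f"{use_case['id']} normal course: {use_case['normal_course']}"]
    for source in ("secondary_scenarios", "exceptions"):
        for ref, description in use_case[source].items():
            stubs.append(f"{ref}: {description}")
    return stubs

for stub in test_case_stubs(USE_CASE):
    print(stub)
```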
Example Test Cases from Use Cases

The following example illustrates taking two use cases from a large set of possible use cases describing a system, and developing functional and forced-error (FET) test cases from them. The subject of this example is a Web application containing lists of movies, their directors, lead actor/actress, and information on the movie for review. The movie database can be written to and updated by "content editors." The database can be searched or read by users of the Web site. The focus in this example is twofold: an editor writing to the database and an Internet surfer searching the database (see Figure 3.10 for the use case diagram and Figures 3.11 and 3.12 for the use case narratives).
Figure 3.10 Block diagram of online movie database (the Content Editor actor adds new movies; the Internet Surfer actor searches by movie).
Use Case ID: 1.1
Use Case Name: Search by Movie
Created By: JW Gibb / Last Updated By: JW Gibb
Date Created: 5/28/2002 / Date Last Updated: 5/28/2002
Actor: Internet Surfer
Description (Purpose of Case): Describes the process of searching by movie name.
Preconditions: The user is somewhere in the Online Movie Database site.
Postconditions: The Search Results page has been displayed.
Normal Course of Events:
1. The user clicks on the Search button. The system displays the Search page.
2. The user enters the movie name in the Search Text field, then selects Search By Movie from the list of options, then clicks the Search button. The system displays the Search Results page with a list of all the movies that match the search text.
3. The search engine checks for misspellings and variations on the title, in order to find "The Lord of the Rings" from "lord of rings."
Secondary Scenarios:
1.1.SS.1: In step 2, the user enters the name of a movie that is not in the movie database. The system displays the Search Results page with a message indicating that the movie specified does not exist in the movie database.
Exceptions:
1.1.EX.1: In step 2, the user does not enter any text in the Search Text field. The system displays an alert message telling the user that the Search Text field cannot be blank.
Uses/Extends: (none)

Figure 3.11 Use case for Internet surfer.
Use Case ID: 2.1
Use Case Name: Add a new movie
Created By: JW Gibb / Last Updated By: JW Gibb
Date Created: 5/28/2002 / Date Last Updated: 5/28/2002
Actor: Content Editor
Description (Purpose of Case): This use case describes the process of adding a new movie to the movie database.
Preconditions: The user is logged in to the content editing interface of the site.
Postconditions: A new movie has been added to the movie database. The system takes the editor to the Add Movie Actor page.
Normal Course of Events:
1. The user clicks the Add a New Movie button. The system displays the Add New Movie page.
2. The user enters the name of the movie in the Movie Name field, then enters the name of the director in the Director field, then enters the year the movie was made in the Year field, then clicks the Save button. The system saves the movie information, then displays the Add Movie Actors page.
Secondary Scenarios:
2.1.SS.1: Enter a movie that is already in the database.
Exceptions:
2.1.EX.1: In step 2, the user leaves either the Movie Name or Director field blank. The system displays an alert message telling the user to enter text in the appropriate fields.
2.1.EX.2: In step 2, the user enters a movie name and year combination that already exists in the database. The system prompts the user that there is already an entry for the movie in that particular year.
Uses/Extends: (none)

Figure 3.12 Use case for content editor.
Test Cases Built from Use Cases

From the preceding use cases, the tester could produce the following test cases for the Internet surfer (see Figure 3.13).
Use Case ID: 1.1
Use Case Name: Search by Movie
Path or Scenario: Search

Test Case 1
Initial Condition (Preconditions): Any Movie site page with Search function available.
Actor: Internet Surfer
Action: Enter a movie name that exists in the Movie Database.
Expected Results (Postconditions): Correct search results displayed.

Test Case 2
Initial Condition (Preconditions): Any Movie site page with Search function available.
Actor: Internet Surfer
Action: Enter a nonexistent movie name.
Expected Results (Postconditions): "No Movie found" search result text.

Test Case 3
Initial Condition (Preconditions): Any Movie site page with Search function available.
Actor: Internet Surfer
Action: Enter extended characters in the movie name text field.
Expected Results (Postconditions): Some error message; ask the developer for the exact message.
Notes: Test data: Â, ß, á, ¡Llegó la Novena

Test Case 4
Initial Condition (Preconditions): Any Movie site page with Search function available.
Actor: Internet Surfer
Action: Leave the Search text field blank.
Expected Results (Postconditions): Error Message #47.

Test Case 5
Initial Condition (Preconditions): Any Movie site page with Search function available.
Actor: Internet Surfer
Action: Enter too many characters.
Expected Results (Postconditions): Unknown result.
Notes: Not detailed in Use Case 1.1.

Test Case 6
Initial Condition (Preconditions): Any Movie site page with Search function available.
Actor: Internet Surfer
Action: Enter illegal characters in the Movie Name text field.
Expected Results (Postconditions): Some error message; ask the developer for the exact message.
Notes: Test data: ,º,¶

Test Case 7
Initial Condition (Preconditions): Any Movie site page with Search function available.
Actor: Internet Surfer
Action: Enter maximum 1 character in the search.
Expected Results (Postconditions): Unknown result.
Notes: Don't know the maximum; not in Use Case 1.1.

(Each test case also carries Pass/Fail/Blocked, Defect Number, and Notes fields to be filled in during execution.)

Figure 3.13 Internet surfer search test cases from Search by Movie use case.
A test case for adding a movie to the database can also be created, as shown in Figure 3.14.
Use Case ID: 2.1
Use Case Name: Add a Movie
Path or Scenario: Edit Content

Test Case 1
Initial Condition (Preconditions): User is on Content Edit main page.
Actor: Content Editor
Action: Click to add a movie page. Enter movie name, director, year. Click Save.
Expected Results (Postconditions): Movie added to database (test by SQL query or search through UI). Page is the Add Actor page.

Test Case 2
Initial Condition (Preconditions): User is on Content Edit main page.
Actor: Content Editor
Action: Click to add a movie page. Enter movie name, director, no year. Click Save.
Expected Results (Postconditions): Movie added to database (test by SQL query or search through UI). Page is the Add Actor page.
Notes: Design issue: Should a movie be accepted without a year?

Test Case 3
Initial Condition (Preconditions): User is on Content Edit main page.
Actor: Content Editor
Action: Click to add a movie page. Enter movie name, no director, year. Click Save.
Expected Results (Postconditions): Error Message #52, "Please add director name."

Test Case 4
Initial Condition (Preconditions): User is on Content Edit main page.
Actor: Content Editor
Action: Click to add a movie page. Enter no movie name (leave blank), director, year. Click Save.
Expected Results (Postconditions): Error Message #53, "Please add Movie Name."

Test Case 5
Initial Condition (Preconditions): User is on Content Edit main page.
Actor: Content Editor
Action: Click to add a movie page. Enter a movie name and director already in the database. Click Save.
Expected Results (Postconditions): Error Message #54, "That Movie with that Director is already in the database. Do you want to edit that record or enter a different movie?"

Test Case 6
Initial Condition (Preconditions): User is on Content Edit main page.
Actor: Content Editor
Action: Click to add a movie page. Enter no movie name, no director, no year (all blanks). Click Save.
Expected Results (Postconditions): Error Message #53, "Please add Movie Name."

Test Case 7
Initial Condition (Preconditions): User is on Content Edit main page.
Actor: Content Editor
Action: Click to add a movie page. Enter a movie name and year already in the database, with a different director. Click Save.
Expected Results (Postconditions): Movie added to database (test by SQL query or search through UI). Page is the Add Actor page.
Notes: Not likely, but a test case. What should the expected result be?

Test Case 8
Initial Condition (Preconditions): User is on Content Edit main page.
Actor: Content Editor
Action: Click to add a movie page. Enter a movie name and director already in the database, with a different year. Click Save.
Expected Results (Postconditions): Error Message #54, "That Movie with that Director is already in the database. Do you want to edit that record or enter a different movie?"

Test Case 9
Initial Condition (Preconditions): User is on Content Edit main page.
Actor: Content Editor
Action: Click to add a movie page. Enter special characters in any text field. Click Save.
Expected Results (Postconditions): Unknown result.
Notes: Test data not detailed in Use Case 2.1.

(Each test case also carries Pass/Fail/Blocked, Defect Number, and Notes fields to be filled in during execution.)

Figure 3.14 Add a Movie test cases from content management use cases.
Templates for Use-Case Diagram, Text, and Test Case

Figure 3.15 contains a basic template from Smartdraw.com. Figures 3.16 and 3.17 contain use case templates from Processimpact.com.
Condition Combination

A long-standing challenge in software testing is to find enough time to execute all possible test cases. There are numerous approaches that can be taken to strategically reduce the number of test cases to a manageable amount. The riskiest approach is to randomly reduce test cases without a clear methodology. A better approach is to divide the total test cases over a series of software builds. The condition combination approach involves the analysis of combinations of variables, such as browser settings. Each combination represents a condition to be tested with the same test script and procedures. The condition combination approach involves the following:

■■ Identifying the variables.
■■ Identifying the possible unique values for each variable.
■■ Creating a table that illustrates all the unique combinations of conditions that are formed by the variables and their values.
Figure 3.15 Sample use case diagram from www.smartdraw.com (an actor connected to several use cases).
Use case template fields:
Use Case ID: / Use Case Name:
Created By: / Last Updated By:
Date Created: / Date Last Updated:
Actor: Who is using this case?
Description (Purpose of Case): The purpose of the use case...
Preconditions: The user does X, Y, and Z.
Postconditions: The user experiences A, B, and C.
Normal Course of Events: The use case begins...
Secondary Scenarios: If the user is not online...
Exceptions: List here...
Uses/Extends: List here...
User Interface Components and Objects for Use Case: UI Control/Object, Action, Response/Description
Miscellaneous Special Requirements: List here...
Notes and Issues: List here...

Template header for use-case-driven test cases:
Product Name: / Test Environment:
Test Case Title: / Time:
Test Suite: / Build:
Tester Name: / Version:
Date: / Time to Complete Tests:

Figure 3.16 Use case template from processimpact.com.
Test case template fields:
Use Case ID: #.#
Use Case Name: Functional Example Area
Path or Scenario: XXX
For each Test Case Number (1, 2, ...): Initial Condition (Preconditions), Actor, Action, Expected Results (Postconditions), Pass/Fail/Blocked, Defect Number, Notes.

Figure 3.17 Test case template from processimpact.com.
Figures 3.18 and 3.19 illustrate an application that includes three variables, each with three possible unique values. The number of complete combinations formed by the variables is 3 × 3 × 3 = 27. The 27 unique combinations (test cases) formed by the three variables A, B, and C are listed in Table 3.2. To execute the test cases defined by these unique combinations, set the values of the variables A, B, and C using the values listed in the corresponding row of the table, execute the procedures, and verify the expected results.
Figure 3.18 Simplified application example.
Figure 3.19 Unique combinations (variables A, B, and C, each taking the values 1, 2, and 3).
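Enumerating every combination is trivial to automate. The short sketch below, offered only as an illustration, produces the same 27 cases that are listed in Table 3.2 from the three variables and their values.

```python
import itertools

VALUES = {"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]}

# Every unique combination of the three variables: 3 x 3 x 3 = 27 cases.
all_cases = list(itertools.product(VALUES["A"], VALUES["B"], VALUES["C"]))

for number, (a, b, c) in enumerate(all_cases, start=1):
    print(f"Case {number}: A={a}, B={b}, C={c}")

print(len(all_cases))   # 27
```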
The Combinatorial Method

The combinatorial method is a thoughtful means of reducing test cases via a pairwise shortcut. It involves analyzing combinations of variables, such as browser settings, one pair at a time. Each unique combination pair represents a condition to be tested. By examining and testing pair combinations, the number of total conditions to be tested can be dramatically reduced. This technique is useful when complete condition combination testing is not feasible. The combinatorial method involves the following:

■■ Identifying the variables.
■■ Identifying the possible unique values for each variable.
■■ Identifying the unique combinations formed by the variables, one pair at a time.
■■ Creating a table that illustrates all of the unique combinations of conditions that are formed by the variables and their values.
Table 3.2 Total Unique Combinations

Case A B C    Case A B C    Case A B C
 1   1 1 1     10  2 1 1     19  3 1 1
 2   1 1 2     11  2 1 2     20  3 1 2
 3   1 1 3     12  2 1 3     21  3 1 3
 4   1 2 1     13  2 2 1     22  3 2 1
 5   1 2 2     14  2 2 2     23  3 2 2
 6   1 2 3     15  2 2 3     24  3 2 3
 7   1 3 1     16  2 3 1     25  3 3 1
 8   1 3 2     17  2 3 2     26  3 3 2
 9   1 3 3     18  2 3 3     27  3 3 3
■■ Generating the unique combinations formed by the first pair, A-B. As illustrated in Table 3.3, arrange the values in the C column to cover the combinations of the B-C and A-C pairs without increasing the number of cases. Set the value of the variables A, B, and C using the information listed in each row of the table, one at a time. Execute the test procedure and verify the expected output.
For more information on this technique, go to AR GREENHOUSE at www.argreenhouse.com. For a paper on this topic, "The AETG System: An Approach to Testing Based on Combinatorial Design" (Cohen et al., 1997), go to www.argreenhouse.com/papers/gcp/AETGieee97.shtml.

Table 3.3 The Combinatorial Method

Unique A-B pairs    Unique B-C pairs    Unique A-C pairs
Case  A  B          Case  B  C          Case  A  C
 1    1  1           10   1  1           19   1  1
 2    1  2           11   2  2           20   1  2
 3    1  3           12   3  3           21   1  3
 4    2  1           13   1  2           22   2  2
 5    2  2           14   2  3           23   2  3
 6    2  3           15   3  1           24   2  1
 7    3  1           16   1  3           25   3  3
 8    3  2           17   2  1           26   3  1
 9    3  3           18   3  2           27   3  2

Resulting pairwise test cases
Case  A  B  C
 1    1  1  1
 2    1  2  2
 3    1  3  3
 4    2  1  2
 5    2  2  3
 6    2  3  1
 7    3  1  3
 8    3  2  1
 9    3  3  2
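A quick script can confirm that a reduced set such as the nine combined cases at the end of Table 3.3 really does cover every pairwise combination. The sketch below is illustrative only; it checks pair coverage for three variables with the values 1, 2, and 3 rather than generating the reduced set itself.

```python
import itertools

# The nine combined test cases from Table 3.3, as (A, B, C) tuples.
REDUCED_CASES = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]
VALUES = [1, 2, 3]

def covers_all_pairs(cases):
    """True if every value pair of every two variables appears in some case."""
    for i, j in itertools.combinations(range(3), 2):      # variable pairs: A-B, A-C, B-C
        needed = set(itertools.product(VALUES, VALUES))    # all 9 value pairs
        covered = {(case[i], case[j]) for case in cases}
        if needed - covered:
            return False
    return True

print(covers_all_pairs(REDUCED_CASES))   # True: 9 cases cover what 27 would
```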
Bibliography

Cockburn, Alistair. Writing Effective Use Cases. New York: Addison-Wesley, 2001.
Cohen, D.M., S.R. Dalal, M.L. Fredman, and C.G. Patton. "The AETG System: An Approach to Testing Based on Combinatorial Design," IEEE Transactions on Software Engineering, Vol. 23, No. 7, July 1997, pp. 437-444.
Jacobson, Ivar, Magnus Christerson, Patrik Jonsson, and Gunnar Övergaard. Object-Oriented Software Engineering: A Use-Case-Driven Approach. Wokingham, England: Addison-Wesley, 1992.
Kaner, Cem, Jack Falk, and Hung Q. Nguyen. Testing Computer Software, 2nd ed. New York: John Wiley & Sons, Inc., 1999.
LogiGear Corporation. QA Training Handbook: Testing Web Applications. Foster City, CA: LogiGear Corporation, 2000.
———. QA Training Handbook: Testing Windows Desktop and Server-Based Applications. Foster City, CA: LogiGear Corporation, 2000.
———. QA Training Handbook: Testing Computer Software. Foster City, CA: LogiGear Corporation, 2000.
QACity.com: www.qacity.com.
CHAPTER 4

Networking Basics
Why Read This Chapter?

Networks hold Web systems together; they provide connectivity between clients and servers. The reliability, bandwidth, and latency of network components such as T1 lines and routers directly influence the performance of Web systems. Having knowledge of the networking environment enables you to identify configuration and compatibility requirements for your test planning, and enhances your bug-analysis abilities.

TOPICS COVERED IN THIS CHAPTER

◆ Introduction
◆ The Basics
◆ Other Useful Information
◆ Testing Considerations
◆ Bibliography
Introduction

This chapter delivers a brief introduction to networking technologies; the information supports the effective planning, testing, analysis of errors, and communication that is required for the testing of Web applications. Network topologies, connection types, and hardware components are also discussed. The chapter also offers test examples and testing considerations that pertain to networking.

POSSIBLE ENVIRONMENTAL PROBLEMS THAT MAY CAUSE AN APPLICATION TO OPERATE INCORRECTLY

■■ Either the client or the server may be inaccessible because it is not connected to the network.
■■ There may be a failure in converting a Domain Name Service (DNS) name to an Internet Protocol (IP) address.
■■ A slow connection may result in a time-out.
■■ There may be an authentication process failure due to an invalid ID or password.
■■ The server or client may be incorrectly configured.
■■ A firewall may block all or part of the transmitted packets.
■■ Childproofing software may be blocking access to certain servers or files.
The Basics

The material in this section introduces, in turn, network types, connectivity services, and hardware devices, and provides other useful information on such topics as TCP/IP, IP addresses, DNS, and subnetting/supernetting.
The Networks

Networks are the delivery system offering the connectivity that glues clients, servers, and other communication devices together.
The Internet

The Internet's infrastructure is built of regional networks, Internet service providers (ISPs), high-speed backbones, network information centers, and supporting organizations (e.g., the Internet Registry and, more recently, the Internet Corporation for Assigned Names and Numbers (ICANN)). Web systems don't exist without the Internet and the networked structures of which the Internet is composed. Understanding how information moves across the Internet, how client-side users gain access to the Internet, and how IPs relate to one another can be useful in determining testing requirements. As illustrated in Figure 4.1, government-operated backbones or very high-speed Backbone Network Services (vBNSs) connect supercomputer centers together, linking education and research communities. These backbones serve as the principal highways that support Internet traffic. Some large organizations, such as NASA, provide Internet backbones for public use.
Figure 4.1 The Internet (supercomputer centers connected by a vBNS backbone, with regional networks and ISPs attaching to it).
Internet service providers and regional networks connect to the backbones. Internet service providers are private organizations that sell Internet connections to end users; both individuals and companies can gain Internet access through ISPs. Online services such as America Online sell access to private sectors of the Internet, in addition to the general Internet. Regional networks are groups of small networks that band together to offer Internet access in a certain geographical area. These networks include companies and online services that can provide better service as groups than they can independently.
Local Area Networks (LANs)

Web-based applications operating over the Internet normally run on local area networks (LANs). The LANs are relatively small groups of computers that have been networked to one another. Local area networks are often set up at online services; government, business, and home offices; and other organizations that require numerous computers to regularly communicate with one another. Two common types of LANs are Ethernet networks and token-ring networks. Transmission Control Protocol/Internet Protocol (TCP/IP), the suite of network protocols enabling communication among clients and servers on a Web system, runs on both of these popular network topologies. On an Ethernet LAN, any computer can send packets of data to any other computer on the same LAN simultaneously. With token-ring networks, data is passed in tokens (packets of data) from one host to the next, around the network, in a ring or star pattern. Figure 4.2 illustrates simple token-ring and Ethernet networks, respectively.
Figure 4.2 Token-ring and Ethernet networks.
Typically, a LAN is set up as a private network. Only authorized LAN users can access data and resources on that network. When a Web-based system is hosted on a private LAN (its services are only available within the LAN) and application access is only available to hosts (computers) within the LAN or to trusted hosts connected to the LAN (e.g., through remote-access service (RAS)), the Web-based system is considered an intranet system.
Wide Area Networks (WANs)

Multiple LANs can be linked together through a wide area network (WAN). Typically, a WAN connects two or more private LANs that are run by the same organization in two or more regions. Figure 4.3 is an illustration of an X.25 (one of several available packet-routing service standards) WAN connecting computers on a token-ring LAN in one geographic region (San Jose, California, for example) to computers on another Ethernet LAN in a different geographic region (Washington, DC, for example).
Figure 4.3 Wide area networks (WANs): a token-ring LAN and an Ethernet LAN connected through an X.25 network cloud.
Connecting Networks

There are numerous connectivity services and hardware options available for connecting networks to the Internet, as well as to each other; countless testing-related issues may be affected by these components.

Connectivity Services

The two common connection types are dial-up connection and direct connection, which are discussed in turn next.

Dial-Up Connection

One very familiar connection service type is the dial-up connection, made through a telephone line.
Plain Old Telephone Service (POTS). POTS is the standard analog telephone line used by most homes and businesses. A POTS network is often also called the public switched telephone network (PSTN). Through an analog modem, a POTS connection offers a transmission rate of up to 56 kilobits per second (Kbps).
Integrated Services Digital Network (ISDN). ISDN lines are high-speed dial-up connections over telephone lines. The ISDN lines with which we are familiar can support a data transmission rate of 64 Kbps (if only one of the two available wires is used) or 128 Kbps (if both wires are used). Although not widely available, there is a broadband version (as opposed to the normal baseband version) of ISDN, called B-ISDN. B-ISDN supports a data transmission rate of 1.5 megabits per second (Mbps), but it requires fiber-optic cable.
Direct Connection

In contrast to dial-up, the other class of connection service is the direct connection, such as a leased line, including T1, T3, cable modem, and DSL.
T1 connection. T1s (connection services) are dedicated, leased telephone lines that provide point-to-point connections. They transmit data using a set of 24 channels across two-wire pairs. One-half of each pair is for sending, the other half is for receiving; combined, the pairs supply a data rate of 1.54 Mbps.
T3 connection. T3 lines are similar to T1 lines except that, instead of using 24 channels, T3 lines use 672 channels (the equivalent of 28 T1 lines), enabling them to support a much higher data transmission rate: 45 Mbps. Internet service providers and Fortune 500 corporations that connect directly to the Internet's high-speed backbones often use T3 lines. Many start-up Internet companies require bandwidth comparable with a T3 to support their e-business infrastructures, yet they cannot afford the associated costs; the alternative for these smaller companies is to share expensive high-speed connections with larger corporations.
DS connection services. DS connection services are fractional or multiple T1 and T3 lines. T1 and T3 lines can be subdivided or combined for fractional or multiple levels of service. For example, DS-0 provides a single channel (out of 24 channels) of bandwidth that can transmit 56 Kbps (kilobits per second). DS-1 service is a full T1 line; DS-1C is two T1 lines; DS-2 is four T1 lines; DS-3 is a full T3 line.
Digital subscriber line (DSL). DSL offers high-bandwidth connections to small businesses and homes via regular telephone lines. There are several types of DSL, including Asymmetric Digital Subscriber Line (ADSL), which is more popular in North America, and Symmetric Digital Subscriber Line (SDSL). ADSL supports a downstream transmission rate (receiving) of 1.5 to 9 Mbps, and an upstream transmission rate (sending) of 16 to 640 Kbps. DSL lines carry both data and traditional voice transmissions; the data portion of the bandwidth, however, is always connected.
Cable connection services. Through a cable modem, a computer can be connected to a local cable TV service line, enabling a data transmission rate, or throughput, of about 1.5 Mbps upstream (sending) and an even higher rate downstream (receiving). However, cable modem technology utilizes a shared medium in which all of the users served by a node (between a couple hundred and a couple thousand homes, depending on the provider) share bandwidth. Therefore, the throughput can be affected by the number of cable modem users in a given neighborhood and the types of activities in which those users are engaged on the network. In most cases, cable service providers supply the cable modems and Ethernet interface cards as part of the access service.
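The wide spread in throughput among these connection types is one reason performance expectations must be set per connection type before testing begins. A rough back-of-the-envelope calculation, such as the following Python sketch, can help; the 200 KB page size and the nominal rate table are illustrative assumptions only, since real throughput is reduced by protocol overhead and line conditions.

# Rough estimate of page download time over various links.
# Page size and nominal rates below are assumptions for illustration.

NOMINAL_RATES_KBPS = {
    "POTS dial-up (56 Kbps)": 56,
    "ISDN (128 Kbps)": 128,
    "ADSL downstream (1.5 Mbps)": 1500,
    "T1 (1.54 Mbps)": 1540,
    "T3 (45 Mbps)": 45000,
}

PAGE_SIZE_KILOBYTES = 200  # hypothetical page with images

def transfer_seconds(size_kilobytes: float, rate_kbps: float) -> float:
    """Convert kilobytes to kilobits (x8) and divide by the line rate."""
    return (size_kilobytes * 8) / rate_kbps

if __name__ == "__main__":
    for name, rate in NOMINAL_RATES_KBPS.items():
        print(f"{name:28s} ~{transfer_seconds(PAGE_SIZE_KILOBYTES, rate):6.1f} s")

A comparison like this makes it obvious why a script time-out that is generous on a T1 can still be too tight for a dial-up user.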
Internet Connection Hardware

To connect a terminal or a network to the Internet, a hardware device such as a modem must be used to enable the communication between each side of the connection. With POTS dial-up connections, analog modems are used. With ISDN, ISDN (digital) modems are used. With DSL and cable connections, DSL modems and cable modems are used.
With leased lines such as T1, T3, and other DS connection services, a channel service unit/data service unit (CSU/DSU) device is used. Though actually two different units, they are often packaged as one. You may think of CSU/DSU as an expensive and powerful version of a modem that is required at both ends of the leased-line connection.
Other Network Connectivity Devices

Local area networks employ several types of connectivity devices to link them together. Some of the common hardware devices include:
Repeaters. Used to amplify data signals at certain intervals to ensure that signals are not distorted or lost over great distances.
Hubs. Used to connect groups or segments of computers and devices to one another so that they can communicate on a network, such as a LAN. A hub has multiple ports. When a data packet arrives at one port, it is replicated to the other ports so that computers or devices connected to other ports will see the data packet. Generally, there are three types of hubs.
Bridges. Used to connect physical LANs that use the same protocol into a single, logical network. Bridges examine incoming messages and pass the messages on to the appropriate computers, on either a local LAN or a remote LAN.
Routers. Used to ensure that data are delivered to the correct destinations. Routers are like bridges, except that they support more features. Routers determine how to forward packets, based on IP address and network traffic. When they receive packets with a destination address of a host that is outside of the network or subnetwork, they route the packets to other routers outside of the network or subnetwork so that the packets will eventually reach their destination. Routers are often not necessary when transmitting data within the same network, such as over a LAN.
Gateways. Used like routers, except that they support even more features than routers. For example, a gateway can connect two different types of networks, enabling users from one network (Novell IPX/SPX, for example) to exchange data with users on a different network type (for example, TCP/IP).
Figure 4.4 illustrates a sample configuration in which a bridge, router, or gateway is used to connect the two networks or subnetworks.
Figure 4.4 Bridges, routers, and gateways: two Ethernet network segments, each with its own servers, workstations, and peripherals, connected by a bridge, router, or gateway.
TCP/IP Protocols

The Internet is a packet-switched network, meaning that all transmitted data objects are broken up into small packets (each less than 1,500 characters). The packets are sent to the receiving computer where they are reassembled into the original object. The TCP is responsible for breaking up information into packets and reassembling packets once they reach their destination. Each packet is given a header that contains information regarding the order in which packets should be reassembled; the header also contains a checksum, which records the precise amount of information in each packet. Checksums are used to determine, on the receiving end, if packets were received in their entirety.
The IP is responsible for routing packets to their correct destinations. The IP puts packets into separate IP envelopes that have unique headers. The envelope headers provide such information as the receiver’s and the sender’s addresses. The IP envelopes are sent separately through routers to their destination. The IP envelopes of the same transmission may travel different routes to reach the same destination—often arriving out of order. Before reassembling the packets on the receiving end, TCP calculates the checksum of each packet and compares it with the checksum of the original TCP headers. If the checksums do not match, TCP discards the unmatched packets and requests the original packets to be resent.
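To make the checksum idea concrete, the sketch below computes the classic 16-bit one's-complement checksum (the RFC 1071 algorithm used for IP, TCP, and UDP headers) over a buffer of bytes. It is a simplified study aid, not something a tester would normally need to reimplement, since the protocol stack performs this verification automatically; the payload used here is an arbitrary even-length example.

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words (RFC 1071 style)."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # combine two bytes into a word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

packet = b"example payload!"                      # even-length sample data
checksum = internet_checksum(packet)
print(f"checksum: 0x{checksum:04X}")

# Receiver-side check: summing the data plus the transmitted checksum
# yields zero only if nothing was corrupted in transit.
assert internet_checksum(packet + checksum.to_bytes(2, "big")) == 0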
The TCP/IP Architecture

For computers to communicate over the Internet, each computer, client or server, must utilize a standard set of protocols called TCP/IP. This suite of protocols is referred to as a TCP/IP stack or socket. There are numerous versions of the TCP/IP stack available, for every target platform and operating system (UNIX, PC, Macintosh, handheld devices, etc.). The TCP/IP stack, as illustrated in Figure 4.5, is composed of five layers: application, transport, Internet, data link, and physical.

The Application Layer
The top layer of the TCP/IP protocol is the application layer. End-user applications interact with this layer. The protocols in this layer perform activities such as enabling end-user applications to send, receive, and convert data into their native formats, and establishing a connection (session) between two computers.
Figure 4.5 TCP/IP stack architecture (five layers: application, transport, Internet, data link, and physical).
Examples of several common protocols associated with the application layer include:
HyperText Transfer Protocol (HTTP). Commonly used in browsers to transfer Web pages and other related data between clients and servers across the Internet.
File Transfer Protocol (FTP). Commonly used in browsers or other applications to copy files between computers by downloading files from one remote computer and uploading them to another computer.
Network News Transfer Protocol (NNTP). Used in news reading applications to transfer USENET news articles between servers and clients, as well as between servers.
Simple Mail Transfer Protocol (SMTP). Used by e-mail applications to send e-mail messages between computers.
Dynamic Host Configuration Protocol (DHCP). Used in server-based applications to allocate shared IP addresses to individual computers. When a client computer requires an IP address, a DHCP server assigns the client an IP address from a pool of shared addresses. For example, a network may have 80 workstations, but only 54 IP addresses available. The DHCP allows the 80 workstations to share the 54 IP addresses in a way that is analogous to an office with 80 employees who share a phone system with only 54 trunk lines. In this scenario, it is expected that in normal operation no more than 54 employees will be on the phone at the same time. That is, the 55th employee and beyond will not be able to get onto the system.
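Testers can exercise one of these application-layer protocols directly from a script rather than through a browser. The short Python sketch below issues an HTTP GET using only the standard library; www.example.com is a placeholder host name, to be replaced with the server under test.

import http.client

# Open a TCP connection and speak HTTP at the application layer.
# www.example.com is a stand-in host; substitute the server under test.
conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()

print("Status :", response.status, response.reason)
print("Server :", response.getheader("Server"))
print("Length :", len(response.read()), "bytes")
conn.close()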
The Transport Layer

The transport layer breaks data into packets before sending them. Upon receipt, the transport layer ensures that all packets arrive intact. It also arranges packets into the correct order. Examples of two common protocols associated with the transport layer are the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). Both TCP and UDP are used to transport IP packets to applications and to flow data between computers. TCP ensures that no transported data is dropped during transmissions. Error checking and sequence numbering are two of TCP's important functions. TCP uses IP to deliver packets to applications, and it provides a reliable stream of data between computers on networks. Once a packet arrives at its destination, TCP delivers confirmation to the sending and receiving computers regarding the transmitted data. It also requests that packets be resent if they are lost.
■■ TCP is referred to as a connection-oriented protocol. Connection-oriented protocols require that a channel be established (a communications line between the sending and receiving hosts, such as in a telephone connection) before messages are transmitted.
■■ UDP is considered a connectionless protocol. This means that data can be sent without creating a connection to the receiving host. The sending computer simply places messages on the network with the destination address and "hopes" that the messages arrive intact.
UDP does not check for dropped data. The benefit of being connectionless is that data can be transferred more quickly; the drawback is that data can more easily be lost during transmission.
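The difference between the two transports is easy to observe with a few lines of socket code. In the sketch below, a UDP datagram aimed at a port where nothing is listening is sent "fire and forget," while a TCP connect to the same closed port fails immediately because no connection can be established; port 50007 on the local host is an arbitrary choice assumed to have no listener.

import socket

HOST, PORT = "127.0.0.1", 50007   # assumed to have no listener

# UDP: connectionless -- the send itself succeeds even though nothing
# is listening; any loss is silent unless the application checks.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", (HOST, PORT))
print("UDP datagram handed to the network (no delivery guarantee)")
udp.close()

# TCP: connection-oriented -- a channel must be established first,
# so connecting to a closed port raises an error right away.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(5)
try:
    tcp.connect((HOST, PORT))
except OSError as exc:            # typically ConnectionRefusedError
    print("TCP connection failed:", exc)
finally:
    tcp.close()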
The Internet Layer

The Internet layer receives data packets from the transport layer and sends them to the correct network address using the IP. The Internet layer also determines the best route for data to travel. Examples of several common protocols associated with the Internet layer include the following:
Internet Protocol (IP). Responsible for basic network connectivity. Every computer on a TCP/IP network has a numeric IP address. This unique network ID enables data to be sent to and received from other networks, similar to the way that a traditional street address allows a person to send and receive snail mail.
Address Resolution Protocol (ARP). Responsible for identifying the address of a remote computer's network interface card (such as an Ethernet interface) when only the computer's TCP/IP address is known.
Reverse Address Resolution Protocol (RARP). The opposite of ARP. When all that is known is a remote computer's network interface card hardware address, RARP determines the computer's IP address.

The Data Link Layer
The data link layer moves data across the physical link of a network. It splits outgoing data into frames and establishes communication with the receiving end to validate the successful delivery of data. It also validates that incoming data are received successfully.

The Physical Layer
The physical layer is the bottom layer of the TCP/IP stack. It supports the electrical or mechanical interface of the connection medium. It is the hardware layer and is composed of a network interface card and wiring such as coaxial cable, 10/100Base-T wiring, satellite links, or leased lines.
Testing Scenarios

Normally, with Web-based systems, we may not have to be concerned with issues related to connection services, connectivity devices, or how the TCP/IP stack may affect the applications. When an HTTP-based (i.e., Web browser-based) application runs within the context of a third-party browser (e.g., Netscape Navigator or Microsoft Internet Explorer), one can argue that how a TCP/IP connection is established, which hardware components are used on the network, or the connection throughput does not seem to matter. But when we understand the basics of the technologies, we can more accurately determine which parts need testing and which parts can be left alone.

HOW TCP/IP PROTOCOLS WORK TOGETHER
Figure 4.6 illustrates a simplified version of the data-flow processes that occur when a user sends an e-mail message. The process on the sender's end begins at the top layer, the application layer, and concludes at the physical layer, where the e-mail message leaves the sender's network.
Figure 4.6 E-mail sent. User A sends an e-mail message to User B:
Application: The e-mail program sends the message to the transport layer via SMTP.
Transport: The transport layer receives the data, divides it into packets, and adds transport header information.
Internet: Data packets are placed in IP datagrams with datagram headers; IP determines where the datagrams should be sent (directly to the destination or else to a gateway).
Data link: The network interface transmits the IP datagrams, as frames, to the receiving IP address.
Physical: Network hardware and wires support transmission.
HOW TCP/IP PROTOCOLS WORK TOGETHER (continued)
The process continues on the receiver's end, working in reverse order. The physical layer receives the sender's message and passes it upward until it reaches the receiver's application layer (see Figure 4.7).
Figure 4.7 E-mail received. User B receives the e-mail message: frames received by the network go through the protocol layers in reverse, and each layer strips off the corresponding header information until the data reaches the application level.
Physical and data link: These layers receive the datagrams in the form of frames and pass them on to the Internet layer.
Internet: The Internet Protocol strips off the IP header.
Transport: The transport layer checks the packets for accuracy and reassembles the packets.
Application: The data is displayed to the user, who can now interact with the data through an e-mail application.
Generally, the two classes of testing-related issues that need coverage are: (1) configuration and compatibility, and (2) performance. By carefully analyzing the delivered features and the supported system configurations, we can reasonably determine the testing requirements for configuration and compatibility as well as for performance.
Connection Type Testing

Usually, the issues associated with various types of connection revolve around throughput and performance rather than configuration and compatibility. For example, a login may fail to authenticate over a dial-up connection but work properly over a direct connection. This symptom may have a number of causes, but one common issue is that the slow connection causes a time-out in the login or authentication process. With slow connections such as dial-up, it may take too long (longer than the script time-out value) for the client-server to send/receive packets of data; thus, the script will eventually time out, causing the login or authentication process to fail. The problem cannot, however, be reproduced when the same procedure is retried on an intranet or a LAN connection. As described earlier, we often work with two types of connections that offer us various throughput rates: direct connection and dial-up connection.
Common direct connection configurations to consider include:
■■ Standard LAN and/or WAN connections (intranet)
■■ Standard LAN and/or WAN connections with a gateway to the Internet using T1, T3, and DS services; DSL; or cable services
■■ Stand-alone connections to the Internet using DSL or cable services
Common dial-up connection configurations to consider include:
■■ Stand-alone connections to the Internet through an ISP directly, using POTS lines or ISDN lines (see Figure 4.8 for an example)
Figure 4.8 Dial-up connection: a desktop computer with a modem connects over POTS or ISDN lines through the local and long-distance telephone companies (copper, fiber-optic, or satellite links) to an ISP, which links to the Internet via a T1 or T3 line.
In the standard dial-up model (Figure 4.8), the client is a PC that is connected to a modem. Through a local telephone line or ISDN, a connection is made to an ISP. Depending on whether the ISP is local or not, the local phone company may have to connect (via satellite, copper, or fiber-optic cable) to the ISP through a long-distance carrier. The ISP also has a modem to receive the phone call and to establish a connection to the PC.
■■ Stand-alone connections to the intranet (LAN) through RAS, using POTS lines or ISDN lines
■■ Stand-alone connections to the intranet (LAN) through virtual private network (VPN) services, using POTS lines or ISDN lines
■■ Stand-alone connections to the intranet (LAN) through RAS, using POTS lines or ISDN lines, and then to the Internet using a leased line (see Figure 4.9 for an example)
Differing from the model in which the client dials up through an ISP is the model of the client dialing up through an RAS. If a LAN is connected directly to the local phone company, there is no need for a long-distance telephone connection. In Figure 4.9, the modem on the server-side receives the connection from the local phone company and translates it for the RAS; after proper authentication, LAN resources are made available to the user. If the LAN has a leased line, the user can link to an ISP and, ultimately, to the Internet through the local phone company.
Figure 4.9 Dial-up connection to a corporate network: a desktop computer dials through the local and long-distance telephone companies (copper, fiber-optic, or satellite links) into an RAS on the corporate Ethernet, and the corporate network reaches an ISP and the Internet over a digital leased line.
Users of the Web system under test may be dialing in with a modem that translates digital computer signals into analog signals; the analog signals are carried over POTS lines. The brand names and baud rates (generally ranging from 14.4 to 56 Kbps) of these modems may affect the perceived performance of the Web system under test. Generally, a modem is a "doesn't matter" issue to a Web application. However, if your application is an embedded browser that also provides drivers for users to connect to certain modems, then the connection type and modem brands may be an issue for testing. If modem compatibility issues are a concern for the system under test, then both client- and server-side modems should be tested.

Potential Dialer Compatibility Issues
Dialer compatibility testing is often required when a Web system interacts with a dialer. Some ISPs, such as EarthLink and AOL, supply users with proprietary dialers. Dial-Up Networking has more than one version of its dialer. Some ISPs supply their new users with CD-ROMs that replace existing browsers and dialers so that users can connect to their services. Such CDs often install new components, which can cause incompatibility or conflict problems that may lead to errors such as a system crash. Some dialers also offer users a couple of protocol options from which to choose. Two common dial-up protocols are Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP). SLIP is the older of the two, but PPP is the more popular, as well as the more stable; it enables point-to-point connections and, when necessary, can retransmit garbled data packets. If the Web application under test is an embedded application that also delivers a dialer that supports more than one dial-up protocol, compatibility testing should be considered. Otherwise, this is usually a "doesn't matter" issue to standard browser-based application testing.
Connectivity Device Testing

Do we need to test our HTTP-based application with various brands and models of hubs, repeaters, bridges, routers, and gateways under various configurations? Hopefully, the answer is no, because a standard Web browser-based application does not interact directly with such devices. However, if a Web application under test is a custom-embedded application that supports several protocols at different layers of the TCP/IP stack, incompatibility issues may be introduced in interactions with the connectivity devices. For example, assume that an embedded HTTP-based application uses Reverse Address Resolution Protocol (RARP) at the Internet layer of the TCP/IP stack to determine the computer's IP address; in this case, compatibility tests should be conducted with connectivity devices that support RARP, such as routers and gateways.
Figure 4.10 Network layer/device interaction: gateways operate at the application layer, routers at the Internet and transport layers, bridges at the data link layer, and repeaters at the physical layer.
Many hardware devices do interact with different layers of the TCP/IP stack. Figures 4.10 and 4.11 illustrate the differences in intelligence and network layer interaction that these devices exhibit. Understanding the implementation and support of Web-based applications in the context of TCP/IP layering allows you to determine if configuration and compatibility testing of hardware devices (such as gateways and routers) will be necessary.
Figure 4.11 Network layer protocols and recognized addresses.
Other Useful Information

This section offers an overview of how IP addresses, DNS, and network subnets work; the intent here is to help testers become better at analyzing errors, as well as troubleshooting network-/Web-related issues.
IP Addresses and DNS

Every network device that uses TCP/IP must have a unique domain name and IP address. Internet Protocol addresses are 32-bit numbers—four fields of 8 bits each, each field separated by a dot (Figure 4.13). To better understand IP addresses, it is helpful to review the binary model of computer data storage (Figure 4.12).
Figure 4.12 Binary model of computer data storage: 8-bit binary representations of the decimal values 1, 3, 133, and 255, using the bit position values 128, 64, 32, 16, 8, 4, 2, and 1.
Binary is base two; it differs from the standard numerical system, which is base ten. Base two (binary) dictates that each digit, or bit, may have one of two values: 1 (meaning on) and 0 (meaning off). The value of a bit depends on its position. Figure 4.12 includes four examples of standard numerals expressed in the binary model: 1, 3, 133, and 255. Starting from right to left, each of the 8-bit positions represents a different number. Depending on the numeral being expressed, each bit is set either to on or off. To calculate the expressed numeral, the on-bit positions must be added up. In the fourth example, note that all positions are set to on, and the resulting value—the maximum value for an 8-bit number—is 255.
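A quick way to move between the dotted-decimal and binary views of an address is shown in the following Python sketch; the address 192.9.200.15 is the same class C example used in Figure 4.13.

def ip_to_binary(address: str) -> str:
    """Render each 8-bit field of a dotted-decimal IP address in binary."""
    return ".".join(format(int(octet), "08b") for octet in address.split("."))

def binary_to_ip(bits: str) -> str:
    """Convert a dotted binary string back to dotted-decimal form."""
    return ".".join(str(int(field, 2)) for field in bits.split("."))

print(ip_to_binary("192.9.200.15"))    # 11000000.00001001.11001000.00001111
print(binary_to_ip("11000000.00001001.11001000.00001111"))   # 192.9.200.15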
IP Address

Internet Protocol addresses are segmented into two numbers: a network number and a host number. The network number identifies a specific organization's network that is connected to the Internet. Within that network there are specific host computers on individual desktops. These host computers are identified by host numbers. The number of hosts that a network can support depends on the class of the network. Figure 4.13 is an example of a Class C IP address.
Network Classes

The Internet is running low on available IP addresses. This is not due to a limitation of the Internet itself or even of software; rather, it is a limitation of the naming convention, or dotted-decimal notation, the industry has established to express IP addresses. Simply put, there are mathematical limits to the number of values that can be expressed in the 32-bit model.
Figure 4.13 Class C IP address. An IP address is a 32-bit number; for example, 192.9.200.15 is 11000000.00001001.11001000.00001111 in binary (four 8-bit fields). The first three fields are the network number, and the last field is the host number.
THREE CLASSES OF TCP/IP NETWORKS
■■ Class A networks. There are only 126 class A network addresses available. Class A networks can support an enormous number of host devices—16,777,216. Not many organizations require access to such a large number of hosts. America Online, Pacific Bell, and AT&T are some of the organizations that have class A networks. Class A networks use only the first 8 bits of their IP addresses as the network number. The remaining 24 bits are dedicated to host numbers.
■■ Class B networks. Class B networks can support approximately 65,000 hosts. The Internet can support a maximum of 16,384 class B networks. Class B networks are quite large, but nowhere near as large as class A. Universities and many large organizations require class B networks. Class B networks use the first 16 bits of their IP addresses as the network number. The remaining 16 bits are dedicated to host numbers.
■■ Class C networks. Class C networks are both the most common and the smallest network class available. There are more than 2 million class C networks on the Internet. Each class C network can support up to 254 hosts. Class C networks use the first 24 bits of their IP addresses as the network number. The remaining 8 bits are dedicated to host numbers.
Domain Name System (DNS)

Although identifying specific computers with unique 32-bit numbers (IP addresses) makes sense for computers, humans find it very challenging to remember network and host names labeled in this way. That is why the Domain Name System (DNS) was developed in the early 1980s. DNS associates alphabetic aliases with numeric IP addresses. The DNS servers match simple alphabetic domain names, such as logigear.com and netscape.com, with the 32-bit IP addresses that the names represent. With this method, Internet users only have to remember the domain names of the Internet sites they wish to visit. If a domain server does not have a certain IP address/domain name match listed in its database, that server will route a request to another DNS server that will, hopefully, be able to figure out the IP address associated with the particular domain name. E-mail addresses are made up of two main components that are separated by an @ symbol. The far right of every e-mail address includes the most general information, the far left includes the most specific. The far left of every
e-mail address is the user's name. The second part, to the right of the @ symbol, is the domain name. For example, in webtester@qacity.com, webtester is the user name and qacity.com is the domain name. The domain name itself can be broken down into at least two components, each separated by a period. The far right component of the domain name is the extension. The extension defines the domain as being commercial (.com), network-based (.net), educational (.edu), governmental (.gov), small business (.biz), resource Web sites (.info), content-rich Web sites (.tv), or military (.mil). Countries outside the United States have their own extensions: Canada (.ca), Great Britain (.uk), and Japan (.jp) are a few of these. To the left of the domain extension is the name of the host organization, or ISP (.logigear, .compuserve, etc.). Often, domain names are further subdivided, as in webtester@montreal.qacity.com. In this example, montreal is the host name; this is the specific host computer that acts as the "post office" for webtester's e-mail. Figure 4.14 shows examples of domain names.
When an e-mail is sent to, for example, webtester@qacity.com, a DNS server translates the letters of the domain name (qacity.com) into the associated numerical IP address. Once in numeric form, the data is sent to the host computer that resides at the domain. The host computer (montreal) ultimately sends the e-mail message to the specific user (webtester).
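A tester can reproduce this name-to-address translation directly with the standard socket library, which uses whatever DNS configuration the test machine has. The sketch below resolves a host name and, where a PTR record exists, maps the address back to a name; the host name shown is a placeholder, since qacity.com from the example may not resolve from every network.

import socket

hostname = "www.example.com"   # placeholder; substitute the host under test

try:
    ip_address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {ip_address}")
except socket.gaierror as exc:
    print(f"DNS lookup for {hostname} failed: {exc}")
else:
    try:
        name, aliases, addresses = socket.gethostbyaddr(ip_address)
        print(f"Reverse lookup of {ip_address} returns {name}")
    except socket.herror:
        print(f"No reverse (PTR) record found for {ip_address}")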
Subnet

Subnets divide a single network into smaller networks, or network segments. Routers are used to send information from one subnet to another. Subnets are useful in managing IP address allotment. For example, assume an organization, with two physical locations, has a class C network and, therefore, has only 254 IP addresses available to distribute to its employees. This organization could request a second class C network to service the second location. But what if the organization is not currently using all of its IP addresses? Getting a second network address would be wasteful. Instead, a subnet would enable this organization to partition its existing class C network into two subnetworks. Figure 4.17 shows a network divided into two subnets with two IP addresses (192.9.200.100 and 192.9.200.200).

MISSING A DNS ENTRY
When you are outside of the intranet and click on the QA Training or TRACKGEAR button in the page illustrated, the browser appears to hang, or you don't get any response from the server. However, when you report the problem, the developer who accesses the same links cannot reproduce it. One of the possible problems is that the DNS entry for the server referenced in the link is only available in the DNS table on the intranet; that is, it is not known to the outside world. (See Figure 4.15.)
Figure 4.15 LogiGear screen capture. (continued)
MISSING A DNS ENTRY (continued)
TIPS
1. Use the View Source menu command to inspect the HTML source.
2. Look for the information that's relevant to the links. In this example, you will find that clicking on the QA Training and the TRACKGEAR buttons will result in requests to the server authorize in the qacity.com domain. (See Figure 4.16.)
Figure 4.16 Checking the HTML source.
3. Try to ping authorize.qacity.com to see if it can be pinged (see the section "Discovering Information about the System" in Chapter 13).
4. If the server cannot be pinged, tell your developer or IS staff so the problem can be resolved.
The benefits of subnetting an existing network over getting an additional network include:
■■ The same network number is retained for multiple locations.
■■ The outside world will not be aware that the network has been subdivided.
■■ A department's network activities can be isolated from the rest of the network, thereby contributing to the stability and security of the network as a whole.
■■ Network testing can be isolated within a subnet, thereby protecting the network from testing-based crashes.
■■ Smaller networks are easier to maintain.
■■ Network performance may improve due to the fact that most traffic remains local to its own subnet (for example, the network activities of business administration and engineering could be divided between two subnets).
Figure 4.17 Subnetting a network: a router divides the network into subnet B (workstations 1 and 2, at 192.9.200.15 and 192.9.200.16, using default gateway 192.9.200.100) and subnet A (workstations 3 and 4, at 192.9.200.150 and 192.9.200.151, using default gateway 192.9.200.200).
Subnet Masks

Subnet addresses are derived from the main network's network number plus some information from the host section in the network's IP address. Subnet masks tell the network which portion of the host section of the subnet address is being used as the network address. Subnet masks, like IP addresses, are 32-bit values. The bits for the network section of the subnet address are set to 1, and the bits for the host section of the address are set to 0. Each network class has its own default subnet mask (see Figure 4.18). Every computer on a network must share the same subnet mask; otherwise, the computers will not know that they are part of the same network.
Figure 4.18 Default subnet masks.
Class A default: 255.0.0.0 or 11111111.00000000.00000000.00000000
Class B default: 255.255.0.0 or 11111111.11111111.00000000.00000000
Class C default: 255.255.255.0 or 11111111.11111111.11111111.00000000
As stated earlier, class C IP addresses have 24 bits to the left devoted to the network address; class B IP addresses have 16 bits, and class A IP addresses have 8 bits. Internet Protocol addresses that are included in incoming messages are filtered through the appropriate subnet mask so that the network number and host number can be identified. As an example, applying the class C subnet mask (255.255.255.0) to the address 126.24.3.11 would result in a network number of 126.24.3 and a host number of 11. The value of 255 is arrived at when all bits of an IP address field are set to 1, or on. When only the default subnet mask is applied, the network is not subdivided; there are no subnets at all.
Custom Subnets

Subnet masks may be customized to divide networks into several subnets. To do this, some of the bits in the host portion of the subnet mask are set to 1s. For example, consider an IP address of 202.133.175.18, or 11001010.10000101.10101111.00010010. Using the default mask of 255.255.255.0, or 11111111.11111111.11111111.00000000, the network address will be 202.133.175.0, and the host address will be 18. If a custom mask, such as 255.255.255.240, or 11111111.11111111.11111111.11110000, is used, the network address will then be 202.133.175.16 (because 28 bits are used for the subnet address instead of 24 as in the default mask), and the host number within that subnet will be 2 (the low-order 4 bits of .18).
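Python's standard ipaddress module performs exactly this arithmetic and is handy for double-checking subnet calculations while analyzing a configuration; the sketch below reuses the 202.133.175.18 example with both the default mask and the custom 255.255.255.240 mask.

import ipaddress

address = ipaddress.ip_address("202.133.175.18")

for mask in ("255.255.255.0", "255.255.255.240"):
    # strict=False lets us pass a host address rather than a network address.
    network = ipaddress.ip_network(f"202.133.175.18/{mask}", strict=False)
    host_bits = int(address) - int(network.network_address)
    print(f"mask {mask}: network {network.network_address}, "
          f"host portion {host_bits}, {network.num_addresses} addresses")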
A Testing Example

Following is an example of an embedded HTTP-based application on a handheld device that involves testing the host name and IP address resolution logic.
Host Name and IP Resolution Tests

CONSIDERATIONS FOR THE SYSTEM UNDER TEST
■■ Adapter address
■■ IP address
■■ Subnet mask
■■ Host name resolved by DNS, WINS, or other technologies
■■ Dynamic Host Configuration Protocol (DHCP)
■■ Default gateway IP address
By the way, you often need to configure your network stack with the correct information for each of the items listed here to enable your computer or any devices connected to the network to operate properly.
TESTING EXAMPLE SPECIFICATIONS
■■ There are two applications: one running on the remote host and the other running on the target host.
■■ The product supports Windows 9x, NT, 2000, or Chameleon TCP/IP stack.
■■ The remote host connects to the private network via a dial-in server.
■■ The product supports RAS and several popular PPP- or TCP/IP-based dial-in servers.
■■ From the remote host, a user enters the phone number, user name, and password that are required to connect to the desired dial-in server.
■■ The remote host establishes a connection with the target host. Therefore, information about the target host name, IP, and subnet mask must be registered on the remote host.
■■ The product supports static-based, as well as dynamic-based, IP addresses.
■■ The product also supports WINS- and DNS-based name/IP resolution.
When the target host IP changes, the product has code that relies on the host name alone, or on the host name and the subnet mask information, to dynamically determine the new IP address. In developing test cases to validate the functionality under various possible scenarios to which the system under test can be exposed, the following attributes are examined:
The host name. May or may not be available on the device.
IP address. May or may not be available on the device.
Subnet mask. May be a standard or a custom mask.
Name server—IP/name-resolving. Configured to use either WINS or DNS.
Type of IP address. May be static or dynamic.
A table is then developed to represent the various unique combinations formulated by these five attributes and the possible values for each attribute. There are 32 combinations generated (see Table 4.1). Each combination is then configured and tested accordingly. Figure 4.19 shows a testing example. In considering testing for compatibility issues, six operating environments are identified, three of which are Windows 9x, NT, and 2000, with the Microsoft default TCP/IP stack; the other three comprise the same set of operating systems with the latest version of the Chameleon TCP/IP stack.
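Because each of the five attributes has two possible values, the 32 combinations can be enumerated mechanically; a small script such as the following sketch is one way to generate the skeleton of a table like Table 4.1 rather than typing it out by hand. The attribute names and value labels are paraphrased from the list above.

import itertools

attributes = {
    "host name": ("available", "not available"),
    "IP address": ("available", "not available"),
    "subnet mask": ("standard", "custom"),
    "name server": ("WINS", "DNS"),
    "IP address type": ("static", "dynamic"),
}

combinations = list(itertools.product(*attributes.values()))
print(f"{len(combinations)} combinations")        # 2**5 = 32

for row, values in enumerate(combinations, start=1):
    cells = ", ".join(f"{name}={value}"
                      for name, value in zip(attributes, values))
    print(f"{row:2d}. {cells}")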
Figure 4.19 A testing example: a remote host (name myhost.softgeartech.com, IP address 202.133.175.18, subnet mask 255.255.255.240) is configured to dial in through an RAS or PPP/TCP/IP-based dial-in server and access resources on SUPERServer, which provides WINS, DNS, and DHCP services.
Testing Considerations
■■ If the application under test runs in its own embedded browser, analyze the application to determine if it utilizes any protocols beyond those at the application level. If it does, how would it affect your configuration and compatibility testing requirements with respect to connectivity devices?
■■ Determine the hardware and software configuration dependencies of the application under test. Develop a test plan that covers a wide mix of hardware and software configurations.
■■ Examine the Web application as a whole and consider the dial-up and direct connection methods. How would each type of connection affect the performance and functionality of the product?
■■ Will users be accessing the system via dial-up connections through an ISP? If so, connectivity may be based upon proprietary ISP strings, such as the parsing of a login script. Will remote users be accessing through an RAS?
■■ Will the application be installing any special modules, such as a dialer and associated components, that may introduce conflicts? Consider dialer platforms, versions, and brand names.
VALIDATING YOUR COMPUTER CONNECTION
Ensure that your test machines are properly configured and connected to the network before you begin testing. To check host connection and configuration in a Windows environment, read the following instructions. Windows NT offers a utility named ipconfig. Windows 9x has winipcfg, which has more of a user interface.
1a. For Windows NT, run IPCONFIG/ALL.
1b. For Windows 9x, run WINIPCFG.
2. Ping the exact value that is received from IPCONFIG and WINIPCFG. To make sure the DNS is working properly, also ping by the domain name. If positive responses are received, then there is a good TCP/IP connection.
3. To ensure that there is a proper TCP/IP connection, ping the loopback IP address: PING 127.0.0.1 or PING YourMachineIPAddress.
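On machines that have a Python interpreter available, roughly the same checks can be scripted so that they run before every test session. The sketch below reports the local host name and IP address, pings the loopback address via the platform's ping command, and confirms that an outside name resolves; the host name www.example.com is a placeholder, and the ping count flag differs between Windows (-n) and UNIX-style systems (-c).

import platform
import socket
import subprocess

def check_local_config() -> None:
    host = socket.gethostname()
    print("Host name :", host)
    print("IP address:", socket.gethostbyname(host))

def ping(address: str) -> bool:
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", count_flag, "1", address],
                            capture_output=True)
    return result.returncode == 0

def check_dns(name: str = "www.example.com") -> None:   # placeholder name
    try:
        print(f"{name} resolves to", socket.gethostbyname(name))
    except socket.gaierror:
        print(f"DNS lookup failed for {name}")

if __name__ == "__main__":
    check_local_config()
    print("Loopback ping:", "OK" if ping("127.0.0.1") else "FAILED")
    check_dns()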
Bibliography

Comer, Douglas. Internetworking with TCP/IP Vol. I: Principles, Protocols, and Architecture, 4th Ed. Upper Saddle River, NJ: Prentice-Hall PTR, 2000.
Gralla, Preston. How the Internet Works. Emeryville, CA: Ziff-Davis Press, 1997.
LogiGear Corporation. QA Training Handbook: Testing Web Applications. Foster City, CA: LogiGear Corporation, 2003.
———. QA Training Handbook: Testing Windows Desktop and Server-Based Applications. Foster City, CA: LogiGear Corporation, 2003.
———. Foster City, CA: LogiGear Corporation, 2003.
Orfali, Robert, Dan Harkey, and Jeri Edwards. Client/Server Survival Guide, 3rd Ed. New York: John Wiley & Sons, Inc., 1999.
CHAPTER 5
Web Application Components
Why Read This Chapter?

Having an understanding of a Web application's internal components and how those components interface with one another, even if only at a high level, leads to better testing. Such knowledge allows for the analysis of a program from its developer's perspective—which is invaluable in determining test strategy and identifying the cause of errors. Furthermore, analyzing the relationship among the components leads to an understanding of the interaction of the work product from the perspective of several independent developers, as opposed to from only the individual developer's perspective. Thus, you analyze the work product from a perspective that is not evident from the analysis of any individual component. You are asking how all these components interact with each other to make up the system. The gray-box tester provides this capability. You look at the system at a level that is different from that of the developer. Just like the black-box tester, you add a different perspective and, therefore, value. Generally, we learn about an application's architecture from its developers during walk-throughs. An alternate approach is to do our own analysis by tracing communication traffic between components. For example, tests can be developed that hit a database server directly, or on behalf of actual user activities, via browser-submitted transactions. Regardless, we need to have a firm grasp of typical Web-based application architecture at the component level if we are to know what types of errors to look for and what questions to ask.

TOPICS COVERED IN THIS CHAPTER
◆ Introduction
◆ Overview
◆ Web Application Component Architecture
◆ Testing Discussions
◆ Testing Considerations
◆ Bibliography
Introduction

This chapter explores the software components of a typical Web-based system—from client-based components on the front end (such as Web browsers, plug-ins, and embedded objects) to server-side components on the back end (such as application server components, database applications, third-party modules, and cross-component communication). It offers insight into what typically happens when users click buttons on browser-based interfaces. It also explores pertinent testing questions such as:
■■ Which types of plug-ins are used by the application under test? What are the testing implications associated with these plug-ins? What issues should be considered during functionality and compatibility testing once these plug-ins have been integrated into the system?
■■ How should the distribution of server-side components affect test design and strategy?
■■ Which Web and database servers are supported by the application? How is Web-to-database connectivity implemented, and what are the associated testing implications?
■■ How can testing be partitioned to focus on problematic components?
Overview

A Web-based system consists of hardware components, software components, and users. This chapter focuses on the software components of Web-based systems.
Distributed Application Architecture

In a distributed architecture, components are grouped into clusters of related services. Distributed architectures are used for both traditional client-server systems and Internet-based client-server systems.
Traditional Client-Server Systems

A database access application typically consists of four elements:
1. User interface (UI) code. The end-user or input/output (I/O) devices interact with this for I/O operations.
2. Business logic code. Applies rules, computes data, and manipulates data.
3. Data-access service code. Handles data retrieval and updates to the database, in addition to sending results back to the client.
4. Data storage. Holds the information.
Thin- versus Thick-Client Systems

When the majority of processing is executed on the server-side, a system is considered to be a thin-client system. When the majority of processing is executed on the client-side, a system is considered to be a thick-client system. In a thin-client system (Figure 5.1), the user interface runs on the client host while all other components run on the server host(s). By contrast, in a thick-client system (Figure 5.2), most processing is done on the client-side; the client application handles data processing and applies logic rules to data. The server is responsible only for providing data access features and data storage.
Figure 5.1 Thin-client system: the user interface runs on the client; the logic/rule components, data-access components, and data storage reside on the server.
Figure 5.2 Thick-client system: the user interface, logic/rule components, and data-access components run on the client; data storage resides on the server.
Web-Based Client-Server Systems

Web-based client-server system components typically can be grouped into three related tiers: (1) user service components (client), (2) business service components (server), and (3) data service components (server). Processing, performance, scalability, and system maintenance are all taken into account in the design of such systems. An example of a three-tiered Web application is shown in Figure 5.3. The components shown in this example are discussed in later sections of this chapter.
Figure 5.3 Three-tiered Web-based system: a browser-based client (UI and script) communicates with a Web server hosting scripts and components that provide services, rules, and logic; the Web server in turn connects, via interfaces such as JDBC or ADO/OLE-DB, to a data server that provides data services and data storage.
Figure 5.4 A Web-based thin client: a browser on the client communicates over a TCP/IP network with the Web server, whose scripts, components, and services access data storage on the server-side.
Figures 5.4 and 5.5 illustrate thin-client and thick-client Web applications, respectively. In the thin-client example, the server is responsible for all services. After retrieving and processing data, only a plain HTML page is sent back to the client. In contrast, in the thick-client example, components such as ActiveX controls and Java applets, which are required for the client to process data, are hosted and executed on the client machine. Each of these models calls for a different testing strategy.
Figure 5.5 Web-based thick client: scripts and components also execute in the browser on the client, communicating over a TCP/IP network with the Web server, its services and components, and data storage.
In thick-client system testing, tests should focus on performance and compatibility. If Java applets are used, the applets will be sent to the browser with each request (unless the same applet is used within the same instance of the browser). If the applet is a few hundred kilobytes in size, it will take a fair amount of bandwidth to download it with reasonable response time. Although Java applets are, in theory, designed to be platform-independent, they should be tested with various supported browsers because they may have been created with different versions of the software development kit (SDK). Each SDK supports a different set of features. In addition, applets need to be interpreted by a Java Virtual Machine (JVM). Different browsers, on different platforms, with their respective versions, have different built-in JVMs, which may contain bug incompatibilities. With ActiveX controls, the network-specific performance hit should occur only once. There may, however, be incompatibility issues with browsers other than Microsoft Internet Explorer and platforms other than Microsoft Windows.

In thin-client systems, incompatibility issues are less of a concern. Performance issues do, however, need to be considered on the server-side, where requests are processed, and on the network, where data transfer takes place (sending bitmaps to the browser). The thin-client model is designed to solve incompatibility problems as well as processing power limitations on the client-side (the thin-client model concentrates work on the server). Additionally, it ensures that updates happen immediately, because the updates are applied at that server only.

Personal Digital Assistants (PDAs), for example, due to their small size, are not capable of handling much processing (see Chapter 6, "Mobile Web Application Platform," and Chapter 20, "Testing Mobile Web Applications," for more information). The thin-client model serves PDAs well because it pushes the work to servers, which perform the processing and return results back to the client (the PDA). Desktop computers (in which the operating systems deliver a lot of power and processing) enable much more processing to be executed locally; therefore, the thick-client approach is commonly employed to improve overall performance.
Software Components

A component is any identifiable part of a larger system that provides a specific function or group of related functions. Web-based systems, such as e-business systems, are composed of a number of hardware and software components. Software components are integrated application and third-party modules, service-based modules, the operating system (and its service-based components), and application services (packaged servers such as Web servers, SQL servers, and their associated service-based components). Component testing is the testing of individual software components, or logical groups of components, in an effort to uncover functionality and interoperability problems. Some key software components include operating systems, server-side application service components, client-side application service components, and third-party components.

TESTING THE SAMPLE APPLICATION
To illustrate how functionality implementation can affect testing efforts, consider the metric generation feature of the sample application (see Chapter 8, "Sample Application," for more information). The sample application enables users to generate bug-report queries that specify search criteria such as bug severity and the names of engineers. Query results are tabulated and ultimately plugged into graphic charts, which are displayed to users. This functionality is implemented by having the user send a query to the Web server (via a Web browser). The Web server in turn submits the query to a database. The database executes the query and returns results. The Web server then sends the resulting data, along with a Java applet or ActiveX control that is to be installed on the client machine. The client-side, after downloading the component, converts the data into a graphically intuitive format for the user. If the downloaded component executes on the client machine, then the system is a thick-client system. If the processing is done on the server (i.e., the Structured Query Language (SQL) server gets results from the database, a GIF graphic is created on the server-side, and the GIF is sent back to the browser), then the system is a thin-client system. These alternate functionality implementations will have different consequences on the testing effort.
Operating Systems

Operating systems extend their services and functionality to applications. The functionality is often packaged in binary form, such as standard dynamic link libraries (DLLs). When an application needs to access a service, the application does it by calling a predefined application program interface (API) set. In addition, with object-based technology, these components extend their functionality by also exposing events (e.g., when a certain object is double-clicked, perform the following action), properties (e.g., the background color is white and the foreground color is black), and methods (e.g., remove or add a certain entry to the scroll list) for other applications to access.
Application Service Components

Server-side packaged servers. A server is a software program that provides services to other software programs from either a local host or a remote host. The hardware box in which a server software program runs is also often referred to as a server. Physical hardware boxes, however, can support multiple client programs, so it is more accurate to refer to the software as the server, as opposed to the hardware that supports it. Packaged servers offer their services and extend their functionality to other applications in a manner that is similar to the extended model of operating systems. Two common packaged servers that are used in Web-based systems are Web servers and database servers. Web servers typically store HTML pages that can be sent, or served, to Web clients via browsers. It is common for packaged Web servers to offer functionality that enables applications to facilitate database activities. Such features can be packaged in a binary module such as a DLL. Access to these features is achieved via predefined APIs. See Table 5.1 for examples of server-side service components.

Table 5.1 Possible Scenario of Software Component Segmentation

APPLICATION SERVICE COMPONENTS
Server-side: Web server; Scripting; Java VM; Database server; Data-access service; Transaction service
Client-side: Web browser; Scripting; Java VM; Other

THIRD-PARTY COMPONENTS
Java components; ActiveX controls; Standard EXEs; Standard DLLs; CGIs; etc.

INTEGRATED APPLICATION COMPONENTS
HTML, DHTML, JavaScript, VBScript, JScript, Perl Script, and others; Standard EXEs; CGIs; API-based components; Java components; ActiveX controls; Standard DLLs
Client-side services. On the client-side, a typical browser supports a variety of services, including a Java VM, which runs Java applets, and script interpreters, which execute scripts. See Table 5.1 for examples of client-side services.
Third-Party Components

Software applications are subdivided into multiple components, otherwise referred to as units or modules. In object-oriented programming and distributed software engineering, components take on another meaning: reusability. Each component offers a template, or self-contained piece of a puzzle, that can be assembled with other components to create other applications. Components can be delivered in two formats: (1) source-based, as in an object-oriented programming class, and (2) binary-based, as in a DLL or Java Archive file format (JAR). Binary-based components are more relevant to the testing concerns discussed in this book.
Integrated Application Components

An integrated application consists of a number of components, possibly including a database application running on the server-side, or a Java-based chart-generation application delivered from the server-side in an HTML page that is running on the client-side, as shown in Figure 5.6. In the Java applet example shown in this figure, the software component executes within the context of the Web browser, or a container. A container can also be a Web-server-based application, a database application, or any other application that can communicate with the component via a standard interface or protocol. Typically, software components are distributed across different servers on a network. They, in turn, communicate with each other via known interfaces or protocols to access needed services. See Table 5.1 for a sample list of integrated software components.
Dynamic Link Library (DLL)

Understanding DLLs and the potential errors that may be associated with them is essential in designing useful test cases. In the early years of software development, the only way that a developer could expose created functionality to another developer was to package the functionality in an object file (.OBJ) or library file (.LIB). This method required the recipient developer to link with the .OBJ or .LIB file. The functionality was therefore locked in with the executable. One of the implications of this approach was that if several executables used the same set of functionality, each executable had to link individually to
the object. This was repetitive, and the linked code added to the size of the executable file, which resulted in higher memory requirements at runtime. More important, if new versions of the object or library files became available, the new code had to be relinked, which led to the need for much retesting. The dynamic link library was introduced to improve the method of sharing functionality. A DLL is a file containing functions and resources that are stored separately from the applications that use them and linked to on demand. The operating system maps the DLL into the application's address space when the application, or another DLL, makes an explicit call to a DLL function. The application then executes the functions in the DLL. Files with .DLL extensions contain functions that are exported, or made available, to other programs. Multiple applications or components may share the same set of functionality and, therefore, may also share the same DLLs at runtime. If a program or component is linked to a DLL that must be updated, in theory all that needs to be done is to replace the old DLL with the new DLL. Unfortunately, it is not this simple. In certain situations, errors may be introduced with this solution. For example, if a DLL that is referenced in the import library links to a component that is not available, then the application will fail to load. (See the error message example in Figure 5.10.)
Figure 5.6 Java applet.
Figure 5.7 DLL caller program.
Here is another example. The DLL caller application illustrated in Figure 5.7 is a Visual Basic application. It uses a few functions that are exported by the system DLL named KERNEL32.DLL. After loading the application, clicking the Show Free Memory button displays the current available physical memory. To implement this feature, the code that handles the click event on the Show Free Memory button has to be written. Because there is an exported function named GlobalMemoryStatus, which is available in the Windows system DLL named KERNEL32.DLL, a developer can simply call this function to retrieve the information. The process of using a function in a DLL is illustrated in Figures 5.8 and 5.9. You call the DLL function when there is a click event on the Show Free Memory button.
Data Structure

Type MEMORYSTATUS
    dwLength As Long
    dwMemoryLoad As Long
    dwTotalPhys As Long
    dwAvailPhys As Long
    dwTotalPageFile As Long
    dwAvailPageFile As Long
    dwTotalVirtual As Long
    dwAvailVirtual As Long
End Type

Figure 5.8 DLL function declaration.
Sub cmdShowFreeMem_Click()
    ' Variable declaration
    Dim YourMemory As MEMORYSTATUS
    ' Function call, with the parameter passed to the function
    GlobalMemoryStatus YourMemory
    lblFreeMem.Caption = "Available Physical Memory: " & _
        Format((YourMemory.dwAvailPhys / 1024), "Fixed") & " Kb"
End Sub

Figure 5.9 DLL function call.
Potential DLL-Related Errors

Missing required DLL. For example, when the application DLLCALLER.EXE is executed on the developer's machine, everything works fine. When it is first executed on a system other than the developer's, however, the error message shown in Figure 5.10 displays. As it turns out, the application was created with Visual Basic 4.0 and depends on the DLL named VB40032.DLL. If that DLL is not installed, the application will not load properly. The application did not complain about KERNEL32.DLL, because it is a system DLL, which is expected to be there; if it were not, even the operating system would not work.

API-incompatible DLL. There may be two versions of the same DLL, and if the data type, structure, or number of parameters has been changed from one version to another, an error will result.

Other incompatibility issues. One of the benefits of using a DLL is that when the author of a DLL needs to change the implementation of a function (to improve performance, for example) but not the API, the change should be transparent to the DLL callers—that is, no problems should result. This is not, however, always the case. You need to test to confirm compatibility with your application.
Figure 5.10 Error caused by missing DLL.
NOTE The preceding section is not intended to suggest that you should start testing at the API level, unless you are specifically asked to do so. It is intended to give you enough background information to design powerful test cases that focus on interoperability issues. See the "Testing Considerations" section later in this chapter for more DLL-related issues.
Scripts

On the server-side, scripts are often used to convert data from one form to another so that the output of one program can be used by a different program. This is called "glue code." A simple script that takes data from a database and sends it to a report writer is a common example. Today this technique is used extensively in Active Server Pages (ASP), a Microsoft technology, and Java Server Pages (JSP), a Sun Microsystems technology: data is taken from the Web server and formatted for the user's browser. More on ASP and JSP later in the chapter.

Related to glue code are filters. Filters are scripts (or programs) that remove unwanted data. An example is an e-mail filter that removes or routes messages based on the user's selection rules. E-mail client applications often have a scripting language built into the application.

Scripts can also be used for many other tasks, such as data validation and UI manipulation on the client-side. In Web applications, scripts are used on both the server- and the client-side.
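As a concrete illustration of client-side scripting, here is a minimal sketch of a data-validation script embedded in an HTML form. The form, field names, and validation rule are hypothetical and are not taken from the sample application; they simply show the kind of script logic that needs its own test cases (valid input, invalid input, and empty input).

<SCRIPT LANGUAGE="JavaScript">
// Hypothetical rule: the bug ID field must be a whole number between 1 and 99999.
function validateBugId(form) {
    var id = parseInt(form.bugId.value, 10);
    if (isNaN(id) || id < 1 || id > 99999) {
        alert("Please enter a bug ID between 1 and 99999.");
        return false;   // cancel the form submission
    }
    return true;        // allow the browser to submit the form
}
</SCRIPT>
<FORM NAME="query" ACTION="/cgi-bin/report" onSubmit="return validateBugId(this)">
    <INPUT TYPE="text" NAME="bugId">
    <INPUT TYPE="submit" VALUE="Find bug">
</FORM>

Note that even with such a script in place, the same check is still needed on the server-side, because a user can bypass client-side validation entirely (for example, by disabling scripting in the browser).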
Web Application Component Architecture

Generally, Web applications consist of server-side and client-side components, including operating systems, browsers, packaged servers, and other associated software. A sampling of these components, along with their associated testing issues, follows.
Server-Side Components

Any computer that provides services to other computers is a server. A single physical computer can house multiple servers (software programs). Servers can also be distributed across multiple physical computers. Testing considerations vary, depending on the number and distribution of servers and other software components associated with a system. Web systems often have several servers included at their back end, allowing users to gain access from a client computer (via a browser) and get the services they need (Web page content or database records). On the hardware side, the
characteristics that distinguish server host quality are similar to those qualities considered favorable in all computers: high performance, high data throughput, scalability, and reliability. Server operating systems need to be more robust than desktop workstation operating systems. Windows 95 and Windows 98, for example, do not offer the reliability or performance required by most servers. Operating systems such as UNIX, Windows NT, and Windows 2000 Advanced Server offer strong security features and administrator tools, in addition to the scalability and reliability required by servers.
Core Application Service Components

Web Servers
Web servers, or HTTP servers, store Web pages or HTML files and their associated contents. Web servers make their contents available to client computers, and are the most essential type of server for Web-based systems. Many software companies develop Web servers: Novell, Netscape, Microsoft, Sun Microsystems, and others. Web servers also serve advanced technology components such as Java servlets, ActiveX controls, and back-end database connectors. Web servers may work with protocols such as FTP and Gopher to pass data back to users.

Database Servers
Database servers act as data repositories for Web applications. Most Web systems use relational database servers (RDBSs). Database servers introduce a variety of testing complexities, which are discussed in Chapter 14, "Database Tests." Prominent database server manufacturers include Microsoft, Oracle, and Sybase. The Structured Query Language (SQL) is the coding language used in relational database management systems (RDBMSs). Refer to Chapter 14 for more information regarding SQL and databases.

Application Servers
Application server is a term used to refer to a set of components that extend their services to other components (e.g., ASP) or to integrated application components, as discussed earlier. Web applications support users by giving them access to data that is stored on database servers. Web applications coordinate the functionality of Web servers and database servers so that users can access database content via a Web browser interface. The sample application provided in Chapter 8, "Sample Application," is a Web-based bug-tracking system. It is an example of an application server that utilizes component-based technologies. See Chapter 8 for more information.
Markup Language Pages

HTML (Hypertext Markup Language) is the standard markup language used in creating Web pages. Similar to HTML is XML (eXtensible Markup Language), which provides a standard way of flexibly describing a data format, enabling systems that support XML to "talk" to each other by sharing both the described format and its data. In short, XML defines the guidelines for structuring and formatting data. It is used to facilitate the generation and interpretation of data, and to ensure that other XML-compliant Web systems can unambiguously interpret and use that data. Both XML and HTML contain markup symbols to describe the display and the interaction of contents on a Web page or data file. Where they differ is that in HTML, the meaning of the content depends on the predefined HTML tags. For example, the predefined <B> symbol signifies that the data following it will be displayed in a boldface font. XML, in contrast, is extensible, meaning that, for example, if the word "cardnumber" is placed within the markup tags, and an application programmer has defined the data following "cardnumber" as a 16-digit credit card number, any XML-compliant Web system interacting with that application will understand how to interpret that data, display it, store it, or encrypt it. (For information, support tools, mailing lists, and support in languages other than English, go to www.w3.org/XML. For a more in-depth description and additional information, go to www.w3.org/XML/1999/XML-In-10-points.)
XML with SOAP

The Simple Object Access Protocol (SOAP) makes it possible for software applications running on different computers with different operating systems to call and transfer information to each other. SOAP was originally meant for use in decentralized, distributed environments. The protocol uses a combination of the HTTP/HTTPS protocols (widely available for use by many operating systems) and XML (the mechanism to describe the data format and data) to facilitate this information exchange. Since SOAP uses HTTP to transfer data, its requests are more likely to get through firewalls that screen out requests other than HTTP. (For more information, go to www.w3.org/TR/SOAP.)
Web-to-Database Connectivity

The value of data-access applications is that they allow interaction between users and data. Communication between users, Web servers, and database servers is facilitated by certain extensions and scripting models.
On the back end, data resides in a database. On the front end, the user is represented by requests sent from the Web server. Therefore, providing connectivity between Web server requests and a database is the key function of Web-based applications. There are several methods that can be employed to establish such connectivity. The most common are Common Gateway Interface (CGI)-based programs with embedded SQL commands, Web server extension-based programs, and Web server extension-based scripts.

Common Gateway Interface (CGI)
The CGI is a standard communication protocol that Web servers can use to pass a user's request to another application and then send the application's response back to the user. CGI applications allow Web servers to interact with databases, among other things. CGI applications are usually implemented in Practical Extraction and Reporting Language (Perl), although they can be written in other programming languages such as C, C++, and Visual Basic.

Once a CGI program has been written, it is placed in a Web server directory called a CGI bin. Web server administrators determine which directories serve as CGI bins. Common Gateway Interface programs must be placed in their correct directories if they are to run properly. This security feature makes it easier to keep track of CGI programs and to prevent outsiders from posting damaging CGI programs. After a CGI program has been placed in a CGI bin, a link to the bin is embedded in a URL on a Web page. When a user clicks the link, the CGI program is launched. The CGI program contacts a database and requests the information that the user has requested. The database sends the information to the CGI program. The CGI program receives the information and translates it into a format that is understandable to the user. This usually involves converting the data into HTML, so that the user can view the information via a Web browser.

The main drawback of CGI scripts is that they run as separate executables on Web servers. Each time a user makes a request of a database server by invoking a CGI script, small amounts of system resources are tied up. Though the net effect of running a single CGI script is negligible, consider the effect of 1,000 concurrent users launching even one CGI script simultaneously; the effect of 1,000 simultaneous processes running on a Web server would likely have disastrous consequences for system memory and processing resources.

Web Server Extension-Based Programs
An alternate, and sometimes more efficient, means of supplying Web-to-database connectivity is to integrate with Web server-exported library functions. Applications based on the Netscape Server API (NSAPI) or the Microsoft Internet Server API (ISAPI) for IIS, commonly referred to as NSAPI/ISAPI, can run in-process and take
advantage of a Web server's native API. Library functions work off of features and internal structures that are exposed by Web servers to provide different types of functionality, including Web-to-database connectivity. The NSAPI/ISAPI-based applications can be DLLs that run in the same memory space as the Web server software. Netscape Server uses NSAPI; Microsoft Internet Information Server uses ISAPI. Both NSAPI and ISAPI effectively offer a similar solution; they are APIs that offer functions in DLL format. These APIs expose the functionality of the Web server software of which they are a part so that required processes can be performed by the server software itself, rather than by a separate executable (such as a CGI script).

Web server extension-based applications, although more efficient from a resource perspective, are not always the best choice for invoking Web server functionality. For example, a Web application might be distributed to multiple server platforms, and it often makes sense to write different code for each platform. A CGI script might be written to interface with a UNIX server, whereas NSAPI code might be used to invoke functions on a Netscape server running in the same system. A third server (e.g., Microsoft Internet Information Server (IIS)) might require either a CGI script or ISAPI code. The development of every Web system, as far as Web-to-database connectivity goes, requires a careful balance between tolerable performance levels, compatibility, and perceived effort of execution.

A drawback of Web server extension-based applications is that, because they are written in compiled languages such as C, C++, or Visual Basic, they are binary. Whenever changes are made to code—for example, during bug fixing—the code has to be recompiled. This makes remote changes to the code more cumbersome. Furthermore, a scripting language is easier to use and, therefore, many new developers can be trained quickly.

Web Server Extension-Based Scripts
Active Server Pages (ASP) is a Microsoft technology that allows for the dynamic creation of Web pages using a scripting language. ASP is a programming environment that provides the capability to combine HTML, scripting, and components into powerful Internet applications. Also, ASP can be used to create Web sites that combine HTML, scripting, and other reusable components. Active Server Page script commands can also be added to HTML pages to create HTML interfaces. In addition, with ASP, business logic can be encapsulated into reusable components that can be called from scripts or other components. ASP scripts typically run on servers, and unlike the binary code model, they do not have to be compiled; therefore, they can be easily copied from distributed software unless encryption measures are undertaken. However, keep in mind that encryption measures add more components and processing requirements to Web servers—not to mention the need for additional testing.
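To make the mixing of HTML and script concrete, the following is a minimal sketch of an ASP page. The page content and values are hypothetical, and JScript is used as the scripting language only to stay consistent with the other script examples in this chapter; VBScript is at least as common in ASP pages.

<%@ LANGUAGE="JScript" %>
<HTML>
<BODY>
<H1>Bug counts</H1>
<%
    // Hypothetical values; in a real page these would come from a database query.
    var openBugs = 12;
    var closedBugs = 30;
    Response.Write("<P>Open bugs: " + openBugs + ", closed bugs: " + closedBugs + "</P>");
%>
</BODY>
</HTML>

Because the script runs on the server, the browser receives only the resulting HTML; the script source itself is never sent to the client.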
The ASP scripts interact with the DLL layer through an interpreter (asp.dll). The DLL layer in turn interacts with the ISAPI layer to provide functionality, such as gateway connectivity. An HTML page that contains a link to an ASP file often has the file name suffix of .ASP.

Java Server Pages (JSP) is a Sun Microsystems technology similar to ASP for the dynamic creation and control of Web page content or appearance through the use of servlets, small programs that run on the Web server to generate a Web page before it is sent to the requesting user. JSP technology is also referred to as the servlet API. Unlike ASP, which is interpreted, JSP calls a Java program (servlet) that is run on the Java Web Server. An HTML page that contains a link to a Java servlet often has the file name suffix of .JSP.

ASP/JSP versus CGI
■■ CGI programs require the Web server operating system to launch an additional process with each user request.
■■ As in-process components, ASP/JSP can run in the same memory space as the Web server application, eliminating the additional resource drain and improving performance.

ASP/JSP versus Web Server Extension-Based Programs
■■ Because NSAPI/ISAPI applications are in-process applications that use a Web server's native API, they run at a speed comparable to that of ASP.
■■ NSAPI/ISAPI applications must be compiled.
■■ ASP/JSP uses scripting languages.
■■ ASP/JSP is faster to develop and deploy than NSAPI/ISAPI.
Other Application Service Components

Search Servers
Often referred to as search engines, search servers catalog and index data that is published by Web servers. Not all Web systems have search servers. Search servers allow users to search for information on Web systems by specifying queries. A query, simply put, is a request to find certain data that has been submitted to a search server by a user. Users submit queries so that they can define the goal and scope of their searches—often specifying multiple search criteria to better refine search results. As new information is introduced into a Web system, search servers update their indices. Robust search servers have the capability to handle large amounts of data and return results quickly, without errors.
Proxy Servers and Firewalls
Proxy servers are sometimes employed by companies to regulate and track Internet usage. They act as intermediaries between networks and the Internet by controlling packet transmissions. Proxy servers can prevent files from entering or leaving networks; they log all traffic between networks and the Internet and speed up the performance of Internet services; and they log IP addresses, URLs, durations of access, and numbers of bytes downloaded. Most corporate Web traffic travels through proxy servers. For instance, when a client computer requests a Web page from the Internet, the client computer contacts the network's proxy server with the request. The proxy server then contacts the Web server that hosts the requested page. That Web server sends the Web page to the proxy server, which in turn forwards the page to the client computer.

Proxy servers can speed up the performance of Internet services by caching data. Caching involves keeping copies of requested data on local servers. Through caching, proxy servers can store commonly viewed Web pages so that subsequent users can access the pages directly from the local server, rather than accessing them at slower speeds over the Internet.

Firewalls are shields that protect private networks from Internet intruders; that is, they prevent unauthorized users from accessing confidential information, using network resources, and damaging system hardware, while allowing authorized insiders access to the resources they require. Firewalls are combinations of hardware and software; they make use of routers, servers, and software to shield networks from exposure to the Internet at large. Two common types of firewalls are packet-filtering firewalls (such as routers) and proxy-based firewalls (such as gateways). Chapter 18, "Web Security Testing," has more information regarding proxy servers and firewalls.

Communication-Related Servers
Numerous communication server types are available to facilitate information exchange between users, networks, and the Internet. If a Web system under test includes a remote-access server, e-mail, a bulletin board, or a chat feature, then communication server components are present and should be tested.

E-Commerce-Related Servers
E-commerce servers provide functionality for retail operations (they are not truly a separate type of server, but rather a specialized use of Web server technologies). Via Web applications, they allow both merchants and customers to access pertinent information through client-side Web browsers.

TASKS PERFORMED BY E-COMMERCE SERVERS
■■ Order taking and order processing
■■ Inventory tracking
■■ Credit card validation
■■ Account reconciliation
■■ Payment/transaction posting
■■ Customer orders/account information

COMMON E-COMMERCE SERVER BRANDS
■■ Ariba
■■ BroadVision
■■ Calico
■■ Vignette
Multimedia-Related Servers
Multimedia servers provide support for high-speed multimedia streaming, enabling users to access live or prerecorded multimedia content. Multimedia servers make it possible for Web servers to provide users with computer-based training (CBT) materials.
Client-Side Components

The client-side of a Web system often comprises a wide variety of hardware and software elements. Multiple brand names and product versions may be present in a single system. The heterogeneous nature of hardware, networking elements, operating systems, and software on the client-side can make for challenging testing.
Web Browsers

Web browsers are applications that retrieve, assemble, and display Web pages. In the client-server model of the Web, browsers are clients. Browsers request Web pages from Web servers. Web servers then locate requested Web pages and forward them to the browsers, where the pages are assembled and displayed to the user. There are multiple browsers and browser versions available for PCs, Macintosh computers, and UNIX computers. Browsers issue requests for HTML pages (although they can also issue requests for ASP, DHTML, and other content). The HTML code instructs browsers how to display Web pages to users. In addition to HTML, browsers can display material created with Java, ActiveX, and scripting languages such as JavaScript and VBScript.
When Web pages present graphics and sound files, the HTML code of the Web pages themselves does not contain the actual multimedia files. Multimedia files reside independently of HTML code, on multimedia servers. The HTML pages indicate to Web browsers where requested sounds, graphics, and multimedia are located. In the past, browsers required separate applications, known as helper applications, to be launched to handle any file type other than HTML, GIF, and JPEG. Plug-ins, such as RealPlayer and QuickTime, are more popular today. They allow streaming media and other processes to occur directly within browser windows. RealPlayer, by RealNetworks, is a popular streaming sound and video plug-in. Windows Media Player is a sound and video plug-in that is built into Windows operating systems. QuickTime, made by Apple, can play synchronized content on both Macintosh computers and PCs. Newer browsers are bundled with complete suites of Internet applications, including plug-ins, e-mail, utilities, and “what you see is what you get” (WYSIWYG) Web page-authoring tools. Netscape Communicator, of which Netscape Navigator is a component, is such a suite. Internet Explorer 5.x and 6.x allow users to view their entire desktops using HTML; Web links are used to interact with the operating system, and live Web content can be delivered directly to the user desktop.
Add-on/Plug-in Components

Additional software may reside on the client-side to support various forms of interactivity and animation within Web pages. Macromedia Shockwave, Java applets, and ActiveX controls are examples of such add-on applications. Java, a full-featured object-oriented programming language, can be used to create small applications, known as applets, within Web pages. ActiveX is a Microsoft technology that behaves similarly to both Java applets and plug-ins. ActiveX controls offer functionality to Web pages. Unlike applets, however, they are downloaded and stored on the user's hard disk and run independently of Web browsers. Microsoft Internet Explorer is the only browser that supports ActiveX controls. Java applets and ActiveX controls can also reside on and be executed from servers.

Communication-Related Components
The client-sides of Web systems often contain applications that facilitate various methods of communication. Such applications take advantage of server-based communication components such as remote-access dial-up, chat (IRC), discussion groups, bulletin boards, and videoconferencing.
NOTE For a discussion on mobile Web application components, see Chapter 6, "Mobile Web Application Platform."
USING THE JAVA CONSOLE TO SUPPORT TESTING

The Java Console provides a facility by which we can monitor running Web systems. With limited views and functions, the console is designed as a logging and analysis tool to aid the debugging process. System.out and System.err can be logged and displayed in the console window. The logged information is helpful in further analyzing any odd or erroneous behavior. Figure 5.11 shows a browser display of a Web page with a missing Java applet. The message line at the bottom of the window states that the tgchart applet could not be found. Figure 5.12 shows the Java Console window where information is given about what happened; in this case, it reads: # Unable to load archive http://209.24.0.39/lib/chart360.jar.java.io.IOException.
Figure 5.11 The server fails to locate the tg.chart Java applet.
Figure 5.12 Java console shows the location where the server was looking for chart360.jar.
To launch the Java Console with Netscape Navigator:
1. From the browser's menu bar, select Communicator > Tools > Java Console.

To launch the Java Console with Internet Explorer on Windows systems:
1. From the browser's menu bar, select Tools > Internet Options > Advanced.
2. Scroll to Microsoft VM and check Java console to enable it.
3. Close and restart the browser.

To launch the Java Console with Internet Explorer on Macintosh systems:
1. From the browser's menu bar, select Edit > Preferences.
2. Under Web Browser, select Java.
3. Enable the following check boxes: Alert on exception, Log Java output, and Log Java exceptions.
4. From the browser's menu bar, select View > Java Messages.
Testing Discussion

The following component architecture example is useful in illustrating effective testing strategies. Figure 5.13 details the chart generation example that was mentioned earlier in this chapter in the section, "Distributed Application Architecture." The pseudodesign for the transaction process runs as follows:
Figure 5.13 Component architecture example. (The diagram shows the client-side browser with the Java chart applet; the server-side Web server with trend.asp, trend.dll, plot.dll, and data.tmp; and the data storage reached via OLE-DB and the stored procedure sp_trend. Numbered arrows 1 through 9 correspond to the steps below.)
1. The user submits a request for a trend chart that compares daily totals of open bugs with closed bugs over the past five days.
2. The Web server requests the file named trend.asp.
3. Trend.dll is called to do some processing work.
4. Trend.dll connects to the database server and calls a stored procedure named sp_trend to pull the requested data.
5. Upon receiving the requested data, trend.dll calls plot.dll and passes the data for calculation and formatting in preparation for drawing the trend chart.
6. The formatted data is then written to a file named data.tmp in comma-delimited format.
7. A third-party Java charting component is invoked with the file data.tmp so that a line chart can be drawn.
8. The Java applet is sent to the client; data.tmp is then deleted.
9. The Java applet is loaded into the user's browser, and a trend chart with the appropriate data is drawn.

Based on the program logic and its component architecture, we will analyze this design to determine potential problems. Then, in an effort to expose possible faults and errors, we will design test cases around the potential problems.
NOTE The potential issues and test cases discussed in this section are by no means definitive. They were designed to encourage you to think more about the possibility of errors in component-based systems. They will help you to think beyond black-box testing from the end user's point of view. Some of the testing issues mentioned in this example are discussed in greater detail in later chapters.
Test-Case Design Analysis

Submitting the request.
■■ What happens if the input data is invalid? You want to determine if there is any error-handling code. Hopefully, there is. You will then need to devise test cases that test the error-handling logic, which consists of three parts: (1) error detection, (2) error handling, and (3) error communication. You also want to know if errors are handled on the client-side, the server-side, or both. Each approach has unique implications. You may also want to know if error handling is done through scripts or through an embedded component (e.g., if a Java applet or an ActiveX control is used for the input UI).
■■ What happens if there is too much data for the last five days? Look for potential boundary condition errors in the output.
■■ What happens if there is no data for the last five days? Look for potential boundary condition errors in the output.
■■ What happens if there is a firewall in front of the Web server? Look for potential side effects caused by the firewall, such as dropping or filtering out certain data packets, which would invalidate the request.
trend.asp is requested.
■■ Is the Web server environment properly set up to allow ASP to be executed? The environment can be set up manually by the system administrator or programmatically via an installation program or setup utility. Regardless, if a script is not allowed to execute, trend.asp will fail.
■■ Will the ASP be encrypted? If so, has it been tested in encrypted mode? The application under test may be using third-party technology to encrypt the ASP files. Incompatibility, performance, time-related, and other environment-related issues may affect functionality.

trend.dll is called.
■■ Is trend.dll a standard DLL or a COM-based DLL? If it is a COM-based object, how is it installed and registered? You cannot access a standard DLL directly from within ASP code, so trend.dll must be a COM-based DLL. Trend.dll is registered using the regsvr32 system utility.
■■ What are the exported functions in the DLLs upon which trend.dll depends? Are they all available on the local and remote host(s)? There are numerous errors related to DLLs that should be considered. (See the "Dynamic Link Library" section earlier in this chapter for more information.)

Calling sp_trend.
■■ The application needs to make a connection to the SQL server before it can execute the stored procedure sp_trend on the database. What issues might cause the connection to fail? There are numerous reasons why this process might fail. For example, there may be an error in authentication due to a bad ID, password, or data source name.
TESTING THE SAMPLE APPLICATION

Please see Chapter 8 for details on the sample application. Following is an example of a real bug that was discovered in the testing of the sample application: trend.dll crashed an ISAPI-based DLL that, in turn, generated error messages on the application server console. However, the end user at the client-side received no communication regarding the error. The user was not notified of the error condition.
■■ When an attempt to connect to the database fails, how is the error condition communicated back to the user? The user may receive anything from a cryptic error message to no message at all. What are acceptable standards for the application under test?
■■ Is the stored procedure properly precompiled and stored in the database? This is typically done through the installation procedure. If for some reason the stored procedure is dropped or fails to compile, then it will not be available.
■■ How do you know that the data set returned by the stored procedure is accurate? The chart might be drawn correctly, but the data returned by the stored procedure might be incorrect. You need to be able to validate the data. (See Chapter 14 for more information.)

Calling plot.dll. The functions in this DLL are responsible for calculating and formatting the raw data returned by sp_trend in preparation for the Java chart application.
■■ Is data being plotted correctly to the appropriate time intervals (daily, weekly, and monthly)? Based on the user's request, the data will be grouped into daily, weekly, and monthly periods. This component needs to be thoroughly tested.
■■ Does the intelligence that populates the reports with the appropriate time periods reside in plot.dll or in sp_trend? Based on what was described earlier, some of the logic can be implemented in the stored procedure and should be tested accordingly.

Write data to file data.tmp.
■■ What happens if the directory to which the text file will be written is write-protected? Regardless of whether the write protection is due to a user error or a program error, if data.tmp is not there, the charting feature will not work.
■■ What happens if plot.dll erroneously generates a corrupt format of the comma-delimited file? The data formatting logic must be thoroughly tested.
■■ What happens if multiple users request the trend chart simultaneously or in quick succession? Multiuser access is what makes the Web application and client-server architectures so powerful, yet this is one of the main sources of errors. Test cases that target multiuser access need to be designed.

Calling the Java charting program.
■■ What happens if the chart program is not found? The Java applet must be physically placed somewhere, and the path name in the code that requests the applet must point to the correct location. If the applet is not found, the charting feature will not work.
■■ What happens if there is a missing cls (class) in a JAR? A JAR file often contains the Java classes that are required for a particular Java application. There is a dependency concept involved with Java classes similar to what was described in the "Dynamic Link Library" section earlier in this chapter. If one or more of the required classes are missing, the application will not function.

Sending results back to the client. The Java applet is sent to the browser, along with the data in data.tmp, so that the applet can draw the chart in the browser; data.tmp is then deleted from the server.
■■ What is the minimum bandwidth requirement supported by the application under test? How big is the applet? Is performance acceptable with the minimum bandwidth configuration? Check the overall performance in terms of response time under the minimum requirement configuration. This test should also be executed with multiple users (for example, a million concurrent users, if that is what the application under test claims to support). (See Chapter 19, "Performance Testing," for more information.)
TESTING THE SAMPLE APPLICATION

A hard-to-reproduce bug that resulted in a blank trend chart was discovered during the development of the sample application. It was eventually discovered that the name of the data.tmp file was hard-coded. Whenever more than one user requested the chart simultaneously or in quick succession, the earlier request resulted in incomplete data or data intended for the subsequent request. The application's developer later designed the file name to be uniquely generated with each request.
■■ Has the temp file been properly removed from the server? Each charting request leaves a new file on the server. These files unnecessarily take up space.

Formatting and executing the client-side component. The browser formats the page, loads the Java applet, and displays the trend chart.
■■ Is the applet compatible with all supported browsers and their relative versions? Each browser has its own version of the "sandbox," or JVM, and does not necessarily have to be compatible with all other browsers. This incompatibility may have an effect on the applet.
■■ What happens when security settings, either on the browser-side or on the network firewall side, prohibit the applet from downloading? Will there be any communication with the user? Look for error conditions and see how they are handled.
Test Partitioning

Given the distributed nature of the Web system architecture, it is essential that test partitioning be implemented. For example, at the configuration and compatibility level, if the application under test requires Microsoft IIS 4.0, 5.0, and 6.0, and Microsoft SQL versions 7.0 and 8.0 (MS SQL 2000), then the test matrix for the configuration should look something like this:

TEST CONFIGURATION ID   MS-IIS   MS-SQL
1                       4.x      7.0
2                       4.x      8.0
3                       5.0      7.0
4                       5.0      8.0
5                       6.0      7.0
6                       6.0      8.0

TESTING THE SAMPLE APPLICATION

The sample application utilizes a third-party Java charting component that enables the generation of charts. The component offers numerous user-interaction features, so it is a rather large object to be sent to a browser. Because the sample application required only basic charts, the developer decided to remove some of the classes in the JAR that were not required by the application. The size of the JAR was thereby slimmed down to about half its original size, greatly improving performance. After about a week of testing, in which the testing team had no idea that the number of Java classes had been reduced, the test team discovered a unique condition that required some handling by the applet. The applet, in turn, was looking for the handling code in one of the classes that had been removed. The test team wrote a bug report and subsequently talked to the developer about the issue. The developer explained what he had done and told the test team that they should make sure to test for this type of error in the future. Several test cases that focused on this type of error were subsequently designed. The test team ultimately found five more errors related to this optimization issue.
Regarding performance, you might wish to compare SQL 7.0 with SQL 8.0. Such a test matrix would look something like this:

TEST CONFIGURATION ID   MS-IIS           MS-SQL
1                       Doesn't Matter   7.0
2                       Doesn't Matter   8.0
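The first matrix is simply the full cross product of the supported IIS and SQL Server versions. For larger sets of supported versions it can be convenient to generate such a matrix programmatically; the following is a minimal sketch, with illustrative version lists rather than a statement of what the sample application supports:

<SCRIPT LANGUAGE="JavaScript">
// Generate every combination of the supported server versions.
var iisVersions = ["4.x", "5.0", "6.0"];   // assumed supported IIS versions
var sqlVersions = ["7.0", "8.0"];          // assumed supported SQL Server versions
var configId = 1;
for (var i = 0; i < iisVersions.length; i++) {
    for (var j = 0; j < sqlVersions.length; j++) {
        // One line per test configuration: ID, IIS version, SQL version.
        document.write(configId + " " + iisVersions[i] + " " + sqlVersions[j] + "<BR>");
        configId++;
    }
}
</SCRIPT>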
Using the sample application's charting feature as an example (refer back to Figure 5.6), assume that plot.dll has been recompiled with a later version of the compiler, but other than that, not a single line of code has been changed. How can test requirements be determined? Here are a few suggestions:
■■ Reexamine the specific functionality that plot.dll offers and look for error scenarios.
■■ For each potential error scenario, consider the consequences.
■■ Use a utility such as Dependency Walker to determine any new dependencies that plot.dll has and the potential implications of those dependencies.
■■ Examine other components to make sure that trend.dll is the only component using plot.dll.
■■ Focus testing on the creation of data.tmp and the data integrity.
■■ Confine testing to the context of the trend chart features only.
■■ Retest all other functionality.
■■ Retest browser compatibility (the Java applet remains the same, so there is no need to be concerned with its compatibility).
■■ Focus testing on the stored procedure sp_trend (because nothing has changed there).
DIFFERENT CONCEPTUAL LEVELS OF PARTITIONING
■■ High-level partitioning. If the goal of testing is to measure server-side response time, then there is no need to run data through the Internet, firewall, proxy servers, and so on. With a load-testing tool (see Chapter 19 for more information), a load generator can be set up to hit the Web server directly and collect the performance data. Figure 5.14 shows an example of high-level partitioning.
■■ Physical-server partitioning. If the goal of testing is to measure per-box performance, then each physical server can be hit independently with a load generator to collect performance data.
■■ Service-based partitioning. If the goal of testing is to test the functionality of the data application and the overall performance of the database server that is providing services to the application, then testing should focus on the database server.
■■ Application/component-based partitioning. The focus of such testing is at the component level, as previously described in the Java chart-generation example.
Figure 5.14 High-level partitioning. (The diagram partitions the system into the client-side, with its operating system, Web browser, and client-based components; the network, carrying TCP/IP traffic; and the server-side, with its operating system, Web server, application server, database, SQL stored procedures, and data.)
Testing Considerations
■■ Determine the server hardware requirements of the system under test. Then generate a matrix of the supported configurations and make sure that these configurations are tested.
■■ Determine the server software component requirements (Web servers, database servers, search servers, proxy servers, communications servers, application servers, e-commerce servers, multimedia servers, etc.) and design interoperability tests to look for errors.
■■ Determine how the server software components are distributed and design interoperability tests to look for errors.
■■ Determine how the server software components interact with one another and design interoperability tests to look for errors.
■■ Determine how the Web-to-database connectivity is implemented (CGI, NSAPI/ISAPI, ASP, or other technologies) and design interoperability tests to look for errors.
■■ Determine the hardware/software compatibility issues and test for those classes of errors.
■■ Determine how the processing is distributed between client and server (thin client versus thick client).
■■ Test partitioning involves testing pieces of a system both individually and in combination. Test partitioning is particularly relevant in the testing of Web systems due to the communication issues involved. Because Web systems involve multiple components, testing them in their entirety is neither an easy nor an effective means of uncovering bugs at an early stage.
■■ Design test cases around the identified components that make up the client-side of a Web application, including browser components, static HTML elements, dynamic HTML elements, scripting technologies, component technologies, plug-ins, and so on.
■■ One way of evaluating integration testing and test partitioning is to determine where the components of an application reside and execute. Components may be located on a client machine or on one or more server machines.
DLL Testing Issues
■■ Use a utility such as Microsoft Dependency Walker to generate a list of DLLs on which the application under test (and its components) depends. For each DLL, determine its version number and where it is located. Determine if the version number is the latest shipping version.

Here is an example of a component-recursive dependency tool, Microsoft Dependency Walker. If the utility is run and DLLCALLER.EXE is loaded (the example application mentioned in the "Dynamic Link Library" section in this chapter), its dependencies will be analyzed (as shown in Figure 5.15). To download Dependency Walker and other related utilities, go to the Microsoft site and search for Dependency Walker, or visit www.dependencywalker.com. A comparable utility called QuickView is available for Windows 9.x and NT systems. To access this utility, right-click on a component that you would like to view and choose QuickView from the context menu.

There are at least four categories of DLLs and components:
1. Operating system-based DLLs. In Windows environments, this includes USER32.DLL, GDI32.DLL, and KERNEL32.DLL.
2. Application service-based DLLs. In Windows environments, this includes ASP.DLL, CTRL3D32.DLL, VB40032.DLL, and so forth.
3. Third-party DLLs. For example, CHART.DLL offers charting functionality to other applications.
4. Company-specific DLLs. For example, Netscape Navigator includes NSJAVA32.DLL.
Figure 5.15 Component-recursive dependency tool.
In testing for DLL-related errors, do the following:
■■ Ensure that nonsystem DLLs are properly installed and that their paths are properly set so that they can be found when the components call them.
■■ Look for potential incompatibility errors, such as API incompatibility or functional incompatibility among various versions of the same DLL.
■■ If there are other applications installed on the system that share the same DLL with components, determine how the installation and uninstallation processes will be handled.
■■ Determine what will happen if the DLL is accidentally erased or overwritten by a newer or older version of the same DLL.
■■ Determine what will happen if more than one version of the same DLL coexists on the same machine.
■■ Recognize that explicitly loaded DLLs must be unloaded when applications and processes no longer need them. Typically, this should occur upon the termination of the calling application.
■■ Test with a clean environment (a system with only the operating system installed on it), as well as a dirty environment (a system loaded with common applications).
■■ Decide what to do if a third-party DLL needs certain files that are not available (printer initialization, for example).
■■ With Windows-based applications, consider looking for errors related to the creation and removal of DLL keys during installation and uninstallation.
Script Testing Issues

Characteristics of a Script

An interpreted language is parsed and executed directly from the source code each time it is executed. The parsing is done on a statement-by-statement basis; syntax error detection occurs only when the statement is about to be executed. Different interpreters for the same language will find different errors. Each interpreter will also execute the script in a slightly different manner. Thus, running a Perl script may produce different results when executed on different UNIX hardware. This same script may also produce different results on a Windows NT or Macintosh system. This means that a script needs to be tested on every system on which it will be deployed.
There are several advantages to using scripting languages, which make them an ideal tool for solving many common problems. Scripts are fast to write, which makes them valuable. Scripting languages also usually provide conveniences, such as automatic string-to-number conversion, that make coding many functions quicker and easier.
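Those same conveniences are also a common source of subtle bugs, which is one reason scripted code needs test cases of its own. The following is a small, hypothetical JavaScript illustration of how automatic type conversion can produce surprising results:

<SCRIPT LANGUAGE="JavaScript">
var quantity = "2";             // a value read from a form field arrives as a string
var total = quantity + 3;       // string concatenation: total is "23", not 5
var difference = quantity - 1;  // numeric conversion: difference is 1
var safeTotal = parseInt(quantity, 10) + 3;   // explicit conversion: safeTotal is 5
</SCRIPT>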
Use of Scripts in Web Applications

Today's applications contain increasing amounts of scripted code, both on the server and on the client-side computer. Sometimes this code is only a couple of lines placed in an HTML Web page, as shown here:

<SCRIPT LANGUAGE="JavaScript">
bla...
</SCRIPT>

Other times the code is very complex. Web servers allow scripts to process HTML forms, provide cookie support, sort and format data, and produce reports. Or the code can appear to be simple but hide complexity and potential problems. For example, here is a common use of scripts in a Web page:

<SCRIPT LANGUAGE="JavaScript">
var supported = navigator.userAgent.indexOf("Mozilla")==0 &&
                navigator.userAgent.substring(8,9) >= 4;
function openWindow() {
    if(supported) {...}
}
</SCRIPT>

The first line of HTML tells the browser that JavaScript follows. The next two lines test whether the browser is Mozilla- (Netscape) compliant and whether it is version 4.0 or higher. The code does so by using the navigator object, and it creates a variable named supported. These lines set the Boolean variable (supported); they contain a logical operation (&&) and a string comparison (.substring(8,9) >= 4), which tests the ninth character of the user-agent string to see if the version of Mozilla is greater than or equal to 4.0. The code then defines a function (openWindow()) that starts a branch (if): if the supported variable is true, meaning the browser is Mozilla-compliant and at least version 4.0, the page can be displayed in a certain way; otherwise, the page is displayed in another way. This code occurred in a company's home page. It was considered simple and straightforward, and it was assumed not to require testing.
Interpreted programs can be several hundred lines of code, for example, when a page sets, updates, and checks Web page cookies. For examples, look at the source code of http://my.yahoo.com. To view the source of an HTML page, choose the View pull-down menu and choose Source (IE) or Page Source (NS). JavaScript is not the only interpreted language; nor is the use of interpreted languages limited to the client-side of Web applications. Many scripts, written for the server-side to process HTML forms or communicate with legacy or back-office systems, are written in Perl, sed, or Tcl. SQL statements are used to request information from relational databases (see Chapter 14). These statements are generated on-the-fly and incorporate data generated by end users. For example, a script can grab a piece of data from an HTML page and dynamically insert it into an SQL query.
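As a hedged illustration (the table and field names below are invented, not taken from the sample application), such a script might build a query string like this. Code of this kind needs test cases for quoting, special characters, and empty input, because the user-supplied value becomes part of the SQL statement:

<SCRIPT LANGUAGE="JavaScript">
// Build an SQL statement from a value entered in an HTML form.
function buildQuery(form) {
    var engineer = form.engineer.value;   // a name such as O'Brien breaks the naive quoting below
    var sql = "SELECT bug_id, severity FROM bugs WHERE assigned_to = '" + engineer + "'";
    return sql;   // the server-side code would pass this string to the database
}
</SCRIPT>

A value containing an apostrophe, for example, produces a malformed statement, or worse, an unintended one, which is exactly the kind of condition test cases should cover.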
Testing Scripts in Web Applications

Scripts can appear in many different places. Scripting languages are commonly used on the client-side in browsers. Server-side scripts are used to start, stop, and monitor Web servers, application servers, databases, mail servers, and many other applications. Scripts are used to move data, transform data, and clean up file system directories. Scripts are a part of installation procedures, daily application maintenance, and the uninstall process.
Coding-Related Problems

A script can have all the coding problems of any computer language and hence needs to be tested just like all other code. Testing interpreted code raises additional concerns over testing compiled code, as there is no compiler to check the syntax of the program. Therefore, to find syntax problems, the code must be executed. You should test 100 percent of the code, that is, every line of the code. Testing every line of a script is laborious, and very few commercial tools provide line or branch coverage for scripting languages.

The most common coding problem will be syntax errors. Syntax errors are caused when the programmer forgets to include a line terminator or some other minor formatting requirement of the language. Many of these syntax errors will cause the program to stop execution and display a message of the form shown in Figure 5.16. In this case, the proper use of an apostrophe in the text of the message inadvertently caused a fatal syntax error.

Many scripts contain hard-coded data as part of the program, rather than keeping data external to the executable code. Often this means that as conditions change, the code will stop working or start behaving in unexpected ways.
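The following is a minimal, hypothetical sketch of the apostrophe problem just described; the message text is invented. A grammatically correct apostrophe inside the string literal ends the string early and makes the statement invalid:

<SCRIPT LANGUAGE="JavaScript">
// Broken: the string literal ends at the apostrophe in "user's",
// so the interpreter reports a syntax error and the script block does not run.
// alert('The user's session has expired');

// Fixed: the apostrophe is escaped, so the string is parsed as intended.
alert('The user\'s session has expired');
</SCRIPT>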
Figure 5.16 Example of client-side script error.

Hard-Coded Problems
The following sample of code exhibits hard-coding of data into the program. If the name "Mozilla" ever changes, or if the location of the release value (8,9) changes, tracking down all possible occurrences will be difficult.

<SCRIPT LANGUAGE="JavaScript">
var supported = navigator.userAgent.indexOf("Mozilla")==0 &&
                navigator.userAgent.substring(8,9) >= 4;
function openWindow() {
    if(supported) {...}
}
</SCRIPT>
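One way to reduce (though not eliminate) this fragility is to isolate the hard-coded values in named variables and to parse the version number rather than reading a fixed character position. The sketch below is an illustrative alternative, not the sample application's actual code, and it still requires the same configuration testing across browsers and versions:

<SCRIPT LANGUAGE="JavaScript">
var REQUIRED_NAME = "Mozilla";   // browser family the page is written for
var REQUIRED_VERSION = 4;        // minimum major version required

// Parse the major version that follows "Mozilla/" instead of assuming it
// always sits at a fixed character position in the user-agent string.
var match = navigator.userAgent.match(new RegExp(REQUIRED_NAME + "/(\\d+)"));
var supported = (match != null) && (parseInt(match[1], 10) >= REQUIRED_VERSION);
</SCRIPT>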
Moreover, users will probably get new versions of the browser before the developers and testers become aware that the new browser is available. With new browsers constantly being released, testing this code has to become an ongoing activity. This is a good opportunity for test automation (see the sketch at the end of this subsection).

Hard-coding and dependence on data formats may cause problems to occur at unexpected times. One example is awk, a UNIX utility used to process records. Piping, or sending, the output of the ls command into awk is a common way to sort and select file system data. The ls command lists the files and subdirectories of the current directory. However, the ls command returns a slightly different organization of information depending on the operating system. This means that migrating the application to a new UNIX machine may cause failures. The symptom might be a crash, a text field where a numeric field was expected, or something subtle such as the wrong value being returned. Testing scripts therefore also requires checking each supported platform for data integrity.
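As noted above, checks like the browser test are good candidates for automation. A minimal, hypothetical table-driven sketch follows (the userAgent strings are sample data, not an exhaustive list); each new browser release then requires only adding a row, not new code:

// Each entry pairs a sample userAgent string with the expected result.
var cases = [
  { agent: "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)", expected: true  },
  { agent: "Mozilla/3.01 (Win95; I)",                            expected: false },
  { agent: "Opera/3.62 (Windows NT 4.0; U)",                     expected: false }
];
// The same expression used by the page under test.
function isSupported(agent) {
  return agent.indexOf("Mozilla") == 0 && agent.substring(8, 9) >= 4;
}
for (var i = 0; i < cases.length; i++) {
  if (isSupported(cases[i].agent) != cases[i].expected) {
    document.write("FAIL: " + cases[i].agent + "<BR>");
  }
}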
Script Configuration Testing

Each server type will have a different script interpreter, depending on the manufacturer, and different interpreters will handle scripts differently. Script testing therefore requires configuration testing just as browser testing does. For a discussion of configuration and compatibility testing, see Chapter 17, "Configuration and Compatibility Tests."
Bibliography

Binder, Robert V. Testing Object-Oriented Systems: Models, Patterns, and Tools. Reading, MA: Addison Wesley Longman, 2000.
LogiGear Corporation. QA Training Handbook: Testing Web Applications. Foster City, CA: LogiGear Corporation, 2003.
———. QA Training Handbook: Testing Windows Desktop and Server-Based Applications. Foster City, CA: LogiGear Corporation, 2003.
Orfali, Robert, Dan Harkey, and Jeri Edwards. Client/Server Survival Guide, 3rd ed. New York: John Wiley & Sons, Inc., 1999.
Reilly, Douglas J. Inside Server-Based Applications. Redmond, WA: Microsoft Press, 2000.
CHAPTER 6

Mobile Web Application Platform
Why Read This Chapter?

TOPICS COVERED IN THIS CHAPTER
◆ Introduction
◆ What Is a Mobile Web Application?
◆ Various Types of Mobile Web Client
◆ WAP-Based Phones
◆ Mobile Web Application Platform Test Planning Issues
◆ The Device Technology Converging Game: Who Is the Winner?
◆ Bibliography and Additional Resources

In the previous chapter we discussed the differences between traditional software systems and Web systems. Those differences, in turn, call for new testing techniques and processes. Mobile Web applications present a new set of challenges in which new types of clients beyond the desktop PC are introduced. It is also essential to examine the various pieces that make up the Web application systems deployed on mobile devices, such as Personal Digital Assistant
(PDA) devices and Web-enabled wireless phones. With this knowledge, we can begin to understand how mobile Web applications are different from desktop Web applications, the limits of testing, and how to focus our testing resources on well-identified areas of risks.
Introduction

This chapter presents the mobile device platform and shows how it applies to mobile Web systems, which can be wired or wireless. It explores the technological similarities as well as the differences between a desktop Web system and a mobile Web system. Testing considerations that are suited to mobile Web platforms are also discussed. More testing methods will be covered in Chapter 20, "Testing Mobile Web Applications."

Although many traditional software testing practices can be applied to the testing of wired and wireless mobile Web applications, there are numerous technical issues specific to mobile platforms that need to be considered. In the same way that other chapters are not about testing a PC and its operating system, this chapter is not about testing the handheld or mobile device itself (meaning its hardware and embedded operating system). It is about testing the Web applications running in a microbrowser and about the accompanying issues in their connection to the Web.
What Is a Mobile Web Application?

A mobile Web application is a Web-based application with one key difference: standard Web-based applications typically run on a desktop PC as the client, whereas mobile Web-based applications run on a mobile device as the client. Currently, the common mobile devices include wireless phones, PDAs, and smart phones (that is, wireless phone/PDA combos). The main difference between a mobile and a desktop Web application is the size of the typical mobile device; the browser running on it, which interprets the Web-based content, has many restrictions that do not exist on a typical desktop PC. One implication is that the Web content and UI must be compressed to accommodate the size restrictions imposed by the mobile device. Often, a lightweight version of the Web content and UI must be created for the mobile client.
Various Types of Mobile Web Client

Generally, we can group mobile devices into five categories: PDAs, handheld computers, mobile phones, messaging products, and specialized devices. In addition, we will also touch on i-Mode devices. At the time of this writing, i-Mode is very popular in Japan, and its market share is expanding to other countries. We expect that it will be available in the United States some time in the near future.
NOTE The messaging products are based on technologies such as SMS (Short Messaging Service), which is an inexpensive method for communicating through text messages of up to 160 characters each. Many of the mobile devices today also have the messaging capability built in. Because we are primarily interested in browser-based applications, we will exclude messaging products from this chapter.
Palm-Sized PDA Devices

PDAs are palm-sized electronic organizers that can run software to connect to the Web using, for example, a landline modem or, increasingly, a wireless modem; this is similar to the way a desktop computer uses software and a modem to connect to the Web. These devices are known by many different names, including Palm, PalmPilot, Palm PC, Pocket PC, and handheld organizers, to mention a few. Figure 6.1 shows examples of three PDA devices available on the market today.

By default, a PDA will offer a full suite of Personal Information Management (PIM) applications, which should include a date book, an address book, a to-do list, and a notebook. These PDAs may have Web-enabling capability via a wired or wireless connection. There are two ways to get connected to the Web: download during data synchronization or connect through a modem.
Figure 6.1 Examples of Palm-sized PDA devices.
Data Synchronizing

An essential feature contributing to the great success of the PalmPilot is the capability to easily synchronize data between the handheld device and the desktop, and vice versa. The Palm HotSync Manager is designed to do two-way data synchronization between the device and the desktop (commonly referred to as conduit software), in this case using functions exported from a number of dynamic link libraries (DLLs). HotSync Manager is the conduit manager. Each data type, whether an address book, calendar, or Web content, needs its own conduit to handle synchronization of that specific data type.

To sync data to or from your PDA and your desktop, you need HotSync Manager on your desktop PC. You then connect the PDA to the PC via a serial or USB cable, a modem, an infrared (IR) link, or a wireless connection. When a HotSync operation is initiated, the sync software compares each record on the PDA to the one stored on the PC and accepts the most recent one (in most cases; these rules can be configured via HotSync Manager). Although they might have different names, most PDA devices have a synchronization feature similar to HotSync Manager for data transfer between the device and the PC. This feature becomes an essential element for offline Web browsing capability, which we will discuss later in this chapter.

The type of application you are testing will dictate how much of your testing needs to focus on data synchronization. For example, if you are testing a mobile Web application that will be used in both online and offline scenarios, then you will need to consider how the offline scenario affects data updating between the client and the PC and/or server side. If you are testing a conduit, or a mobile application that includes a data conduit, then your testing should focus more on data integrity, based on various data conditions between the device and the PC and/or the server side.
Web Connectivity

It is also worth mentioning that in the early days of PDA development and manufacturing, a PDA was considered a wired device, meaning that, in order to make a connection to the Internet, you needed to connect to a data line via a modem. Nowadays, PDA devices can connect to the network or Internet via a wireless card or modem, an IR connection, or a mobile phone connection, in addition to a landline phone connection. Figure 6.2 shows examples of a PDA connecting to the Internet through wireless and wired connections. (For more information on wired connectivity, refer back to Chapter 4, "Networking Basics"; wireless networking is discussed more fully later in this chapter.) Support for wireless connectivity means you will need to take several wireless-specific issues into your testing considerations; these include security issues, a higher probability of losing connectivity when moving about while staying connected, and lower bandwidth availability for data transfer.
Figure 6.2 Wired and wireless Internet connection.
Various Types of Palm-Sized PDA Devices

One way to differentiate the different flavors of PDA devices is to identify the operating system embedded in the device. At the time of this writing, there are four major operating system players:
■■ Palm OS, produced by Palm, Inc.
■■ Windows CE, produced by Microsoft
■■ BlackBerry, produced by Research in Motion
■■ EPOC, produced by Symbian
Palm (which at the time of this writing had split into two companies, Palm Computing and Palm Source) not only produces the operating system but also manufactures the PalmPilot devices that have the OS and basic applications already embedded. Several major manufacturers also license the operating system from Palm and then build their PDA devices around the licensed software. Two examples in the commercial space are Handspring's Visor and Sony's CLIE devices. Symbol Technologies is another example of a manufacturer that licenses the Palm OS to build PDA devices that target vertical markets (e.g., health care, transportation, and fulfillment). Palm OS is a strong leader in the commercial or consumer space. One of the reasons for its success is its
simplicity. At least in its early days, Palm wanted to design a device that was small (pocket size), inexpensive, and power-efficient, and that performed its few tasks very well. For those reasons a powerful processor, multitasking, color display, and standard applications normally used on the desktop were all considered unnecessary.

Many other devices are built with the Microsoft Windows CE operating system. Examples of these include Casio's Cassiopeia, Compaq's iPAQ, and HP's Jornada. In contrast to Palm's designs, Windows CE is designed to leverage users' familiarity with the Windows operating system and applications. It is more like migrating the desktop environment to the palm-size computer, extending its normal use. Therefore, navigating in a Windows CE environment is less natural than in the Palm environment. At the time of this writing, Windows CE devices support multithreaded applications, enabling more powerful applications to multitask on the device. On the flip side, these devices demand more processing power and therefore consume more power.

Symbian's EPOC-based devices comprise another set of PDA devices built around an operating system other than Palm's or Microsoft's. Symbian is a joint venture whose owners include Ericsson, Matsushita (Panasonic), Motorola, Nokia, Psion, Siemens, and Sony Ericsson. The operating system was originally developed by Psion; therefore, all PDA devices manufactured by Psion used this operating system. A few examples include the Psion Revo, Psion Series 5, and Psion Series 5mx. Today, there are many other devices produced by the joint venture owners. Examples include the Ericsson R380 and Nokia 7650 smart phones.

Finally, Research in Motion's (RIM) BlackBerry OS-based devices are keyboard-based PDAs that also have support for wireless connectivity and Web browsing. Examples of devices produced by RIM include the RIM 957 and the BlackBerry 5810.
Handheld PCs

As illustrated in Figure 6.3, handheld PCs are trimmed-down, lightweight versions of a laptop. They often look like miniature laptops and are tiny enough to carry around in your hand; their keyboards are big enough to make typing functional, although not necessarily natural. The two major operating system players in the handheld PC market are Microsoft, with Windows CE and Windows for Handheld PC, and Symbian, with EPOC. Windows CE and Windows for Handheld PC are trimmed-down versions of the standard operating systems. Some people prefer using a laptop instead of a handheld PC because laptops are getting smaller and lighter, and they run the standard version of the OS, which offers full capabilities with lots of applications available.
Figure 6.3 Handheld PC example.
WAP-Based Phones

WAP, an acronym for Wireless Application Protocol, is the technology used to connect mobile devices such as mobile phones to the Web by transforming the information on the Web so that it can be displayed more effectively on the much smaller screen of a mobile phone or other mobile device. Figure 6.4 shows an example of a WAP-based phone.

Recall that in Chapter 2, "Web Testing versus Traditional Testing," we discussed various flavors of client-server configurations in which the client is assumed to be a desktop PC. WAP-based systems work much like a client-server system, with one major difference: the client is a mobile device such as a mobile phone. To make this WAP-based client system work, two other elements are involved. First, there must be an intermediary server, called a WAP gateway, sitting between the Web server and the client. The WAP gateway is responsible for converting client requests from a mobile device into regular Web requests and, in turn, converting the Web response information from the Web server back into a format that can be transmitted to and displayed on the mobile device. In addition, the mobile device, such as the mobile phone, must be WAP-enabled; that is, the embedded software used to turn the device into a WAP client must exist on the device. Figure 6.5 illustrates a WAP-enabled phone connected to a Web server via a WAP gateway. (Several phone manufacturers produce WAP-enabled phones, including Ericsson, Hitachi, Motorola, Nokia, and Sanyo, to name a few.) An obvious physical difference between a WAP-enabled phone and a regular mobile phone is the size of its screen, which is normally larger, providing more display real estate for Web content.
Figure 6.4 Example of WAP-based phone.
Second, to make your Web pages viewable by WAP-enabled devices, the HTML pages must be converted into either HDML (Handheld Device Markup Language) or WML (Wireless Markup Language) format. HDML is a simplified version of HTML designed to reduce the complexity of a regular HTML page, which can be large and hence time-consuming to transfer over a wireless network whose bandwidth is rather limited. WML is similar to HDML except that it is based on XML (eXtensible Markup Language), enabling the use of many commercially available XML tools to generate, parse, and manipulate WML. WML can also use XSL (eXtensible Stylesheet Language) and XSLT (XSL Transformations) to construct WML decks from XML metalanguages. A WML deck is a collection of WML cards, each of which is a single block of WML code containing text or navigation items and forming part of the interface for a WML-based application.

Another difference is that HDML does not allow scripting, while WML supports its own version of JavaScript, called WMLScript, enabling client-side scripting. Unlike JavaScript in HTML, however, WMLScript is treated much like graphics in HTML; that is, the script files must be saved separately from the WML files. Converting HTML pages to WML pages can be done using conversion tools (which saves time) or manually (which is more elegant because the process takes usability into consideration). WML pages also take into consideration that the screen size is smaller; pages with large graphics and a lot of scrolling text are not easy to view on mobile devices.
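For reference, a deck is simply an XML document containing one or more cards, and the microbrowser fetches the whole deck in a single request. The following is a minimal, hypothetical WML 1.1 deck (invented for illustration, not taken from any site discussed here):

<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <!-- First card: shown when the deck is loaded -->
  <card id="home" title="Store Finder">
    <p>Welcome. <a href="#results">Find a store</a></p>
  </card>
  <!-- Second card: navigated to within the same deck -->
  <card id="results" title="Results">
    <p>Nearest store: 100 Main Street</p>
  </card>
</wml>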
Figure 6.5 WAP-based client-server architecture.
NOTE More information on WAP and WML can be found on the Mobile Links page of www.qacity.com, which contains links including www.wapforum.com, www.nokia.com, and www.motorola.com, and offers additional reference information.
i-Mode Devices

In 1999, NTT DoCoMo, the leading telecommunications operator in Japan, launched a new service called i-Mode (short for information-mode). It has become very popular, with more than 20 million subscribers today and still growing. The i-Mode technology is similar to WAP technology, but instead of using WML content on the server side, i-Mode pages are produced with cHTML (Compact HTML), a well-defined subset of standard HTML designed for small information appliances. Similar to WAP, the i-Mode client-server architecture requires a gateway to translate wireless requests from a mobile phone to the Web server, and vice versa.

A typical i-Mode client is an i-Mode-enabled phone, which is a regular wireless phone with a larger screen (typically between 96×90 pixels and 96×255 pixels), often supporting up to 16-bit color. Like a WAP-based phone, it has an embedded microbrowser for browsing i-Mode sites that are tagged in cHTML. There are thousands of i-Mode-compatible Web sites on the Internet today. Another characteristic of the i-Mode phone and service is that it is always on. However, since data transmission is packet-based (similar to TCP/IP), users are charged by the size of data transmitted, not by the connection duration. That means i-Mode users can enjoy a variety of typical Internet activities, such as Web browsing, making travel arrangements, buying movie tickets, getting driving directions, e-mailing, video-phoning, and so on.

At the time of this writing, i-Mode services and devices are not yet widely available outside Japan, though they are slowly being introduced in Europe and, hopefully, in the United States in the near future. This is important because i-Mode phones use a microbrowser to interpret cHTML tags, and, as it turns out, cHTML-based browsers are also used in many other types of small information appliances such as wireless PDAs and wearable communicating devices. Since the i-Mode technology has applications that cross many types of devices, it may become more common; hence, it is important to study. It is in our interest to understand the different types of browser markup languages so that we can plan our test coverage more adequately.
Smart Phones or Mobile Phone/PDA Combos

A smart phone is a wireless phone with extended capabilities that enable access to digital data. Think of a smart phone as a mobile device that combines wireless phone features with PDA features. This means that it can handle wireless phone calls; support access to the Internet; send and receive voicemails, e-mails, and faxes, usually through unified messaging or UMS (Unified Messaging
System), similar to a Web-enabled phone; and manage personal information, just like a PDA. Other features that a smart phone might offer include LAN connectivity, a pen-style data-entry method, wired or wireless data transfer between phone sets and computers through synchronization, remote access to desktop computers, and remote control of electronic systems at home or in the office. In sum, a smart phone is designed to offer functionality similar to a data-enabled phone or i-Mode phone, with these differences: a smart phone has a larger screen display, touch-screen support, more CPU power (relative to the WAP-based phone), more RAM, and more secondary storage (static RAM or flash memory).

This raises the question: Is a smart phone a cellular phone with PDA capabilities, or a PDA with cellular phone capabilities? As the industry evolves, it is really hard to say, and, ultimately, it may be irrelevant. Consider that Handspring, Inc. introduced the VisorPhone Springboard module to add wireless mobile phone and Internet/network connectivity to its PDA devices. Shortly after that, the Treo series was introduced as the company's new line of products, called Communicators, that have both PDA functionality and wireless phone and data connectivity built in. Kyocera licensed the Palm OS from Palm to build its series of smart phones by adding PDA capabilities to its wireless phones. (Figure 6.6 shows examples of smart phones produced by Handspring and Kyocera, respectively.) These represent efforts to bring PDA functionality to wireless phones by adding larger displays. But regardless of which direction these producers take, their goal is the same: to combine mobile phone and PDA functionality.

It is also essential to point out that there are devices built with operating systems other than Palm OS. As mentioned earlier, two other major players are Symbian EPOC and Microsoft Windows CE. Note that a wireless phone can act as a wireless modem for a PDA, enabling wireless network connectivity; paired this way, the PDA effectively becomes a connected PDA.
Figure 6.6 Examples of smart phones.
It is fascinating to watch the race among developing technologies, and to monitor market acceptance for the most functional cellular smart phone. However, it is not our job to be concerned with which company ultimately wins this race. Our job is to continue learning the evolving technologies. By equipping ourselves with adequate knowledge, we will be in a better position to offer the best testing strategies to product teams.
Mobile Web Application Platform Test Planning Issues

In addition to learning about the various mobile device platforms available, it's also useful to examine another set of variables that influence testing strategies:
■■ Various types of microbrowsers embedded in the devices
■■ Hardware and software attributes
■■ Wireless networks that deliver the mobile Web application functionality and data
■■ Service platforms and support infrastructure needed by mobile Web applications
Microbrowsers

Like standard desktop browsers, a microbrowser is designed to submit user requests, receive and interpret results, and display the data received on the screen, enabling users to browse the Web on palm-sized or handheld devices. The key difference is that a microbrowser is optimized to run in low-memory, lower-power CPU, and small-screen environments, such as those presented by PDAs, WAP-based phones, and other smart phones and handheld devices. In comparison to a desktop PC browser, a microbrowser might have either much less sophisticated graphics support or no graphics support at all. We can classify microbrowsers using the following categories:
■■ Text-only browsers
■■ WAP-based browsers that support HDML, WML, or both
■■ Palm OS-based Web Clipping Applications that support PQA-based (Palm Query Application) Web pages and standard HTML pages
■■ The AvantGo browser, which supports its Web Channel formatted pages
■■ i-Mode-based Web browsers that support cHTML pages
■■ Browsers that support standard HTML
Let’s examine how the relationship between Web site content and Web browsers might affect the output. Figure 6.7 displays a Web page from www .imodecentral.com. Note that the page has support for both HTML and cHTML, but not WML. Four Web browsers used in this example are: ■■
Window Internet Explorer browser (HTML).
■■
Palm OS-based browser Blazer by Handspring (HTML/WML/cHTML) runs in an emulator.
■■
PIXO Internet Microbrowser (cHTML/HTML) runs in an emulator.
■■
Windows WinWAP browser (WML).
In this case, the WinWAP browser has problems displaying the content formatted for HTML because it supports WML only. The PIXO Internet Microbrowser displays the cHTML version of the Web page correctly. Blazer and Internet Explorer display the HTML version of the Web page correctly.

Figure 6.8 shows the OpenWave.com home page, which has support for both HTML and WML (using the same set of browsers as in Figure 6.7). Note that WinWAP displays the WML version of the content. Blazer displays the WML version of the content instead of the HTML version. Internet Explorer and PIXO display the HTML version of the content. Finally, Figure 6.9 shows the PIXO i-Mode browser displaying the same HTML content with the Images option turned off.
Figure 6.7 Example of a Web site with support for HTML and cHTML.
Figure 6.8 Example of a Web site with support for HTML and WML.
Web Clipping Application: How Does It Work?

Because wireless bandwidth is limited (see Table 6.2 for a sample of various wireless network transfer rates), waiting for a normal Web page to download can be a disappointing experience. Furthermore, the typical screen of a PDA device is so small that it is not efficient for displaying regular Web pages.
Figure 6.9 i-Mode browser displaying HTML content with the Images option turned off.
Figure 6.10 Examples of PQAs, Palm query applications.
Palm’s solution to this problem was to specifically design Web pages for display on the smaller screens and then to enable them to be preloaded on the device via the HotSync process. These tiny Web pages are templates with complete layouts, form input and output, UI elements, and so on, but no actual data. So, when you need up-to-date information, you request the Web page, and the device sends the query to the Web Clipping Proxy server. In turn, only the actual requested information of that page is sent back to the device. The rest of the static elements of the page, such as images, animation, templates, and so on, already reside on the device.
Figure 6.11 Searching, selecting, and downloading a PQA.
Figure 6.12 Download complete, PQA icon installed, and PQA launched.
This scheme enables a huge reduction of data transferred across the network. The process is referred to as Web clipping, and the tiny Web pages are called Palm query applications, PQAs, or Web Clipping Applications. The PQAs are HTML pages that have been processed through a program called PQABuilder, which converts the HTML and graphics into PQAs. Examples of two different PQAs are shown in Figure 6.10. (More information on Web Clipping can be found at these Palm Web sites: www.palmos.com/dev/tech/webclipping and www.palmos.com/dev/support/docs/webclipping/.) Predesigned Web Clipping Applications are available for download and installation on your PalmPilot. Following is an example of how a predesigned PQA can be located, downloaded, and used. Figure 6.11 shows the searching and downloading steps. Continuing from Figure 6.11, Figure 6.12 shows the Starbucks PQA displayed as an Application icon, and how the PQA can be used to locate the nearest store by entering the address information.
Handheld Device Hardware Restrictions

Some of the hardware restrictions that make development and testing of mobile Web applications different from desktop PC Web applications include:
■■ Small screen display
   ■■ Limited screen resolutions
   ■■ Grayscale display (although color display is becoming more widely available)
■■ Low-power CPU
■■ Limited RAM
■■ Restricted input method (typically, several control and number buttons)
■■ Restricted character fonts (typically, support for only a single font)
■■ No secondary storage
Table 6.1 itemizes the differences among samples of various types of devices. The characteristics of these devices demand specific considerations in designing Web applications for display in the restricted environments, beyond markup language incompatibility issues. For example, a large image will not be useful when it displays on a small screen; add-in modules such as plug-ins might not be supported. (In Chapter 20, we will discuss several other testing issues surrounding these limitations.)
Software-Related Issues

Operating system differences. As discussed earlier, each mobile device will adopt a certain operating system as part of its platform, whether it is Palm OS, a Windows-based OS, EPOC, or another flavor. Each operating system has its own feature-rich offerings, as well as certain limitations. These factors affect the overall capabilities of the device.

Device-specific microbrowser. Mobile devices that support Web browsing capabilities often supply a device-specific default browser. The default browser is generally designed to interpret the target Web content, whether it is Web Clipping, WAP-based HDML or WML, cHTML, or standard HTML.

Installed microbrowsers. If the device supports installed microbrowsers, several commercial microbrowsers are available on the market for a particular device.

Supported Web server content. In addition to interpreting markup language content, certain browsers may support additional functionality such as JavaScript, cookies, SSL, WTLS (Wireless Transport Layer Security), and so on.

Online and offline. Due to the bandwidth limitations of current wireless networks, it is useful to be able to cache Web content on a secondary storage device so that users can read news or browse site content offline. This content downloading can be done via the data synchronization process: during the sync operation, the desktop PC can be connected to a wired network, which often has much higher bandwidth, so Web content downloads to the device quickly, at the push of a button.
Wireless Network Issues

As in any client-server system, there are three players: the client, the server, and the network. In the case of a mobile Web application, the client happens to be a palm-size or handheld device. In order to support this mobility, a wireless network is needed in addition to the wired network already in place. In this discussion, we will touch on the wireless networks installed and managed by leading operators in the telecommunications industry, as well as provide an overview of LAN-based wireless technologies.

If you are testing the mobile device itself, which consists of software and hardware and their interactions with the network supported in certain localities, there is little you can do to test those interactions unless you are in the locality where the application will be deployed. However, if you are testing locality-independent Web applications, you do not have to worry about network incompatibility issues. The main testing issue you should take into consideration is the bandwidth (or lack thereof) and how it affects the performance and behavior of your mobile Web applications. Generally, testing with various networks, such as for WAP-based phones, is concerned only with WAP-gateway compatibility and, in some cases, compatibility with local networks such as GSM and CDMA. Nevertheless, a brief discussion is in order on the wireless network technologies used today in various parts of the globe (see Table 6.2).
Wireless Network Standards

One way to help understand the evolution of wireless technology is to briefly examine the different network standards and learn what the generation designations (1G, 2G, 3G, and 4G) mean.

1G
As illustrated in Table 6.2, AMPS stands for Advanced Mobile Phone Service. AMPS, introduced in the early 1980s, became, and currently still is, the most widely deployed cellular or wireless system in the United States. AMPS is considered the first generation of wireless technology designed for voice transfer, or 1G.

2G
The second-generation, or 2G, protocols added digital encoding capability and, with it, support for limited data communications, including fax, Short Messaging Services (SMS), and various levels of encryption and security. Examples of 2G protocols include GSM (Global System for Mobile Communications), TDMA (Time Division Multiple Access), and CDMA (Code Division Multiple Access). GSM is the most widely used system; in fact, it is the primary wireless telephone standard in Europe. GSM has now been deployed in close to 150 countries around the world.
It's important to point out that GSM operates on the 900-MHz and 1800-MHz bands throughout the world, except in the United States, where it operates on the 1900-MHz band. Unless you have a triband phone, this difference introduces incompatibility between mobile devices used in the United States and those used in other locales. TDMA and CDMA are more popular in North America, South America, and other parts of the world, including Central America, South Africa, and Asia Pacific.

Although adding limited digital data transfer is a major improvement, one inherent limitation of 2G is its data transfer rate, which is, at top, 9600 bps. This limitation explains why a technology such as WAP is needed for mobile Web applications; WAP was designed with bandwidth limitations in mind. Of course, other mobile devices such as palm-size PDAs and handheld computers suffer the same wireless data transfer limitations. However, these types of devices have more CPU power, RAM, and secondary storage in comparison to traditional mobile phone solutions.
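To put the 9600-bps figure in perspective, a rough back-of-the-envelope calculation (our own illustration, not drawn from the standards themselves): transferring even a modest 50-KB Web page at that rate takes on the order of 40 seconds (50 × 1,024 × 8 / 9,600 ≈ 43 seconds), before allowing for any protocol overhead or retransmission.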
NOTE An issue related to 2G and 3G/2.5G is that, currently, there are limitations that frequently prevent the simultaneous use of voice and TCP/IP on mobile phones. These limitations are imposed by the NOM (Network Operation Mode) of the carrier's network and by the phone's own hardware. On many systems, you cannot receive incoming phone calls or make outgoing phone calls while you are on a data call, and vice versa.
3G/2.5G
No one can deny that we live in a world of information overload. To advance mobile computing, one major bottleneck must be resolved: wireless bandwidth limitations. The third-generation, or 3G, standards are designed to do just that. Whether the eventual standard used will be UMTS (Universal Mobile Telecommunication System), WCDMA (Wideband Code-Division Multiple Access), or Cdma2000 (Multicarrier Radio Transmission Technology), 3G promises to boost the data transmission rate up to 2 Mbps, good enough to keep mobile users in contact at all times and to run the most widely used multimedia applications.

3G isn't expected to reach maturity and be fully deployed until around 2005, so many operators are also looking into a possible middle-of-the-road solution. That is where 2.5G comes in. The two major contenders for 2.5G are GPRS (General Packet Radio Service) and EDGE (Enhanced Data GSM Environment). A number of GSM operators in the United States and Europe are migrating to GPRS. In addition to promising to boost the transfer rate from 9600 bps to 114 Kbps (or, more realistically, to somewhere between 28 Kbps and 64 Kbps), another benefit of GPRS is that data is sent in packets, similar to TCP/IP, rather than via a circuit-switched connection. This means that users can connect in "always-on" mode without being charged for connection time; telecom operators can instead charge users based on the amount of data transferred.
Table 6.2 Various Wireless Network Standards

STANDARD NAME | GENERATION | DATA TRANSFER RATE | STANDS FOR | OPERATES ON BAND(S) | OPERATES IN
AMPS | 1G | Analog only | Advanced Mobile Phone Service | 800 MHz to 900 MHz | United States and other countries
D-AMPS (TDMA) | 2G | 9.6 Kbps | Digital-Advanced Mobile Phone System | 800 MHz and 1,900 MHz | North, South, and Central America
GSM (uses a variation of TDMA) | 2G | 9.6 Kbps | Global System for Mobile Communications | 900 MHz, 1.8 GHz, and 1.9 GHz | Europe and more than 140 other countries; United States supports 1.9-GHz band only
CDMA | 2G | 9.6 Kbps | Code Division Multiple Access | 800 MHz and 1.9 GHz | North America, Africa, South America, Pacific Asia, and others
GPRS* | 2.5G | 56 to 114 Kbps (approx.) | General Packet Radio Service | 900 MHz, 1.8 GHz, and 1.9 GHz | Upgrade path for GSM or TDMA
EDGE | 2.5G | Up to 384 Kbps | Enhanced Data GSM Environment | 900 MHz, 1.8 GHz, and 1.9 GHz | Upgrade path for GSM or TDMA
1XRTT (first phase of Cdma2000) | 3G | Up to 144 Kbps (approx.) | (Single-Carrier) Radio Transmission Technology | 800 MHz and 1.9 GHz | North America primarily and other countries
WCDMA | 3G | 384 Kbps to 2 Mbps (approx.) | Wideband Code-Division Multiple Access | 1.885 GHz to 2.2 GHz | Japan (2001), moving into Europe, Asia Pacific, and then United States
UMTS | 3G | Up to 2 Mbps | Universal Mobile Telecommunication System | 1.885 GHz to 2.2 GHz | Upgrade path for GSM or TDMA (possibly for CDMA in some cases)
Cdma2000 (3XRTT) | 3G | 144 Kbps to 2 Mbps | (Multicarrier) Radio Transmission Technology | 1.885 GHz to 2.2 GHz | Upgrade path for CDMA
TBD | 4G | 20 to 40 Mbps | N/A | N/A | Expected to be deployed between 2006 and 2010

*For more information on GPRS worldwide coverage, go to: www.gsmworld.com/roaming/gsminfo.
EDGE is another significant player in the 2.5G movement. EDGE enables operators to upgrade their systems through software that promises to boost the data transfer rate up to 384 Kbps. This is a very significant improvement. In fact, some operators planning to upgrade to 3G will still be competing against the bandwidth capability delivered by EDGE.

Ready for 4G?
What is the future beyond 3G? It's 4G, of course. Fourth-generation wireless is expected to follow 3G within a few short years. The major improvement of 4G over 3G communications is, again, an increased data transmission rate, reaching between 20 and 40 Mbps. 4G is expected to deliver more advanced versions of the same improvements promised by 3G, including worldwide roaming capability, enhanced multimedia and streaming video, universal access, and portability across all sorts of mobile devices.
Wireless Modem

Another important wireless technology is Cellular Digital Packet Data (CDPD), a standard designed to work over AMPS to support wireless access to the Internet and other packet-switched networks. Phone carriers and modem producers together offer CDPD-based network connectivity, enabling mobile users to access the Internet at data transfer rates of up to 19.2 Kbps. The Minstrel series of wireless modems is an example of popular, commercially available modems for various PDA devices, including the PalmPilot and Pocket PC.
Wireless LAN and Bluetooth

Two technologies that we predict will have success in their own niches in the mobile movement are 802.11 and Bluetooth. Although these technologies are more applicable to the WLAN (Wireless Local Area Network), they deserve mention here. 802.11 is a family of specifications for WLANs that includes 802.11, 802.11a, 802.11b, and 802.11g. Rather than explaining the differences among these standards, for the purpose of this discussion we can generalize that they offer a data transfer rate of somewhere between 11 Mbps and 54 Mbps or, more realistically, up to 60 percent of that expected transfer rate. 802.11 can operate within a distance of a few hundred feet.

Bluetooth, in contrast, is designed to operate at a much shorter distance (about 50 feet). It is a specification that describes how, using a short-range wireless connection, mobile phones, computers, and PDAs can easily interconnect with each other and with home and business phones and computers. Bluetooth requires that a transceiver chip be installed in each device. Data can
be exchanged at a rate of 1 Mbps. Users take advantage of Bluetooth technology by using it to interconnect various mobile devices, including cellular phones, mobile communicators such as SMS devices, PDAs, portable audio players, and desktop computers. For example, a PDA device can synchronize data with the desktop wirelessly, and Bluetooth earphones can be used with Bluetooth mobile phones.

Given the variety of available wireless connectivity options, the main impact on testing will be the need to take into consideration the different bandwidths that the majority of the target customers will have. In particular, we need to focus on the lowest common denominator to ensure that the performance of the mobile Web application under test will meet the customers' expectations. In addition, we should also take into account the higher probability of losing the connection on a wireless network and make sure to develop scenarios that test the handling of those undesirable conditions.
Other Software Development Platforms and Support Infrastructures

If your company develops and deploys a wireless solution, then, much as when building an e-business application, the application will have additional dependencies on software development platforms. These platform solutions include Sun's J2ME (Java 2 Micro Edition) and QUALCOMM's BREW (Binary Run-time Environment for Wireless). There will also be content management issues to deal with: specifically, how will you effectively deliver the content to a mobile device? If you have a standard Web application, for example, you will have to massage the content to work on a mobile device. Your company may need a solution specifically designed to deliver content in these trimmed-down environments. With respect to back-office applications such as ERP and front-office applications such as CRM, your application will also need a data-access solution for mobile devices. Finally, a wireless infrastructure will also be needed to complete the mobile application deployment.

To accelerate development and find a cost-effective solution, your company might opt to invest in third-party development platform software, content management client-server software, or data-access client-server solutions; or your company might opt to outsource an ASP (Application Service Provider) solution. While utilizing third-party technologies enables development teams to shorten time-to-delivery, these technologies also cause problems of their own, such as bugs, configuration and incompatibility issues, performance issues, and vendor-centric issues. In either case, you have to consider that all of these technologies will have an effect on the application under test, as we have experienced in the past with standard Web applications.
The Device Technology Converging Game: Who Is the Winner?

As wireless data transfer rates improve, the mobile device industry players are in hot competition to bring better products and services to the target audience. This includes delivering more power and features in ever smaller devices. In this race, the lines are blurring among mobile devices such as wireless PDAs, handheld computers, handheld communicators, smart phones, and WAP-based phones.

PDA devices are adding built-in wireless connectivity, mobile phone features, Web access, and audio and video playing and recording capabilities. An example is the aforementioned Handspring Treo product line, which is positioning itself as a communicator rather than a PDA. Alternative solutions for adding features include extending device capabilities through expansion module technologies, such as the Handspring Secure Digital Card, Multimedia Card, or Type II PC Card technology, to offer wireless connectivity, wireless voice phone, wireless video phone, digital camera, and video recording capabilities.

From the opposite direction come the so-called smart phones: cellular phone vendors are licensing PDA operating systems (or using their own), such as Palm OS, Microsoft Windows CE, and EPOC, to include in the phone. This is done in an effort to add PDA features and local computing and data storage capabilities. Data synchronization capabilities must be added to support data transfer via a desktop host solution. Certainly, the two-device solution (a phone and a data-centric device connecting to each other, and ultimately to the Web, through Bluetooth or IR technology) will continue to be a growing path in its own right.

Finally, we expect to see more of the Web on mobile appliances, as well as wearable devices, introduced into the market, which might complement or replace some of the existing or emerging product categories. Whatever direction the industry at large, and your company in particular, takes, rest assured that you will continue to learn more about these new technologies as they are introduced.
Bibliography and Additional Resources

Bibliography

Arehart, Charles, Nirmal Chidambaram, Shashikiran Guruprasad, Alex Homer, Ric Howell, Stephan Kasippillai, Rob Machin, Tom Myers, Alexander Nakhimovsky, Luca Passani, Chris Pedley, Richard Taylor, and Marco Toschi. Professional WAP. Birmingham, United Kingdom: Wrox Press Inc., 2000.
Collins, Daniel, and Clint Smith. 3G Wireless Networks. New York: McGraw-Hill Professional, 2001.
Garg, Vijay Kumar. Wireless Network Evolution: 2G to 3G. Upper Saddle River, NJ: Prentice Hall PTR, 2001.
Lin, Yi-Bing, and Imrich Chlamtac. Wireless and Mobile Network Architectures. New York: John Wiley & Sons, Inc., 2000.
Pogue, David, and Jeff Hawkins. PalmPilot: The Ultimate Guide, 2nd ed. Sebastopol, CA: O'Reilly & Associates, 1999.
Rhodes, Neil, and Julie McKeehan. Palm OS Programming: The Developer's Guide, 2nd ed. Sebastopol, CA: O'Reilly & Associates, 2001.
Additional Resources

Compact HTML for Small Information Appliances: www.w3.org/TR/1998/NOTE-compactHTML-19980209
CTIA—The Cellular Telecommunications & Internet Association: www.wow-com.com
Developer Resources Page on pencomputing.com: www.pencomputing.com/developer
GSM, TDMA, CDMA, & GPRS. What is it?: www.wirelessdevnet.com/newswire-less/feb012002.html
HDML or WML: www.allnetdevices.com/developer/tutorials/2000/06/09/hdml_or.html
i-Mode FAQ: www.eurotechnology.com/imode/faq.htm
i-Mode FAQ for Developers: www.mobilemediajapan.com/imodefaq
i-Mode Compatible HTML: www.nttdocomo.co.jp/english/i/tag/imodetag.html
Internet.Com Wireless Page: www.internet.com/sections/wireless.html
mBusiness Magazine (wireless technology magazine and books): www.mbusinessdaily.com
Mobile Computing Magazine (news magazine for mobile computing): www.mobilecomputing.com
Mobile Information Device Profile (MIDP): http://java.sun.com/products/midp/
Mobile Software Resources (for i-Mode/HTML development): www.mobilemediajapan.com/resources/software
Mobile Technology Resources: www.jmobilemediajapan.com/resources/technology
mpulse-nooper.com, "Application Testing in the Mobile Space": http://cooltown.hp.com/mpulse/0701-developer.asp
Nokia Forum: www.forum.nokia.com
Online WAP Testing Tool: www.wapuseek.com/checkwap.cfm
Pen Computing Magazine: www.pencomputing.com
QACity.Com | Mobile: www.qacity.com/Technology/Mobile
WAP Devices Metrics: www.wapuseek.com/wapdevs.cfm
WAP FAQs: www.wapuseek.com/wapfaz.cfm
WAP Testing Papers: www.nccglobal.com/testing/mi/whitepapers/index.htm
WAP Testing Tools: http://palowireless.com/wap/testtools.asp
WAP Tutorials: www.palowireless.com/wap/tutorials.asp
Web-based WAP Emulator (TTemulator): www.winwap.org
WinWAP, Mobile Internet Browser for Windows: www.winwap.org
YoSpace's Emulator: www.yospace.com
CHAPTER 7

Test Planning Fundamentals
Why Read This Chapter?

A crucial skill required for the testing of Web applications is the ability to write effective test plans that consider the unique requirements of those Web applications. This skill is also required to write the sample test plan for the sample application. (See Chapter 8, "Sample Application," and Chapter 9, "Sample Test Plan," for details.)

TOPICS COVERED IN THIS CHAPTER
◆ Introduction
◆ Test Plans
◆ LogiGear One-Page Test Plan
◆ Testing Considerations
◆ Bibliography
Introduction

This chapter discusses test documentation, including test plan templates and section definitions. It also explains the efficiencies of the LogiGear One-Page Test Plan, details the components of issue and weekly status reports, and lists some helpful testing considerations.

Test planning for Web applications is similar to test planning for traditional software applications; that is, careful planning is always critically important to effective structuring and management. Test planning is an evolutionary process that is influenced by numerous factors: development schedules, resource availability, company finances, market pressures, quality risks, and managerial whim. Test planning begins with the gathering and analysis of information. First, the product under test is examined thoroughly. Schedules and goals are considered. Resources are evaluated. Once all associated information has been pulled together, test planning begins.

Despite the complex and laborious nature of the test planning process, test teams are not generally given much direction by management. If a company-approved test-plan template does not exist, test teams are often simply instructed to "come up with a test plan." The particulars of planning, at least for the first draft of the test plan, are normally left up to the test team.
Test Plans

A test plan is a document, or set of documents, that details testing efforts for a project. Well-written test plans are comprehensive and often voluminous in size. They detail such particulars as testing schedules, available test resources, test types, and the personnel who will be involved in the testing project. They also clearly describe all intended testing requirements and processes. Test plans often include quite granular detail, sometimes including test cases, expected results, and pass/fail criteria.

One of the challenges of test planning is the need for efficiency. It takes time to write these documents. Although some or all of this time might be essential, it is also time that is no longer available for finding and reporting bugs. There is always a trade-off between depth/detail and cost, and in many of the best and most thoughtful test groups, this trade-off is a difficult and uncomfortable one to make.

Another challenge of test planning is that it comes so early in the development process that, more than likely, no product has yet been built on which to base planning. Planning, instead, is based on product specifications and requirements documents (if such documents exist, and to whatever extent that
they are accurate, comprehensive, and up-to-date). As a consequence, planning must be revised as the product develops, often moving in directions that are different from those suggested by original specifications.

Assuming that they are read (which often is not the case), test plans support testing by providing structure to test projects and improving communication between team members. They are invaluable in supporting the testing team's primary responsibility: to find as many bugs as possible.

A central element of test planning is the consideration of test types. Although every test project brings with it its own unique circumstances, most test plans include the same basic categories of tests: acceptance tests, functionality tests, unit tests, system tests, configuration tests, and regression tests. Other test types (installation tests, help tests, database tests, usability, security, load, performance, etc.) may be included in test plans, depending on the type of Web application under test. Sometimes, testing groups also need to determine how much automation and which automated testing tools to use. How will test coverage be measured, and which tools will be used? Other tasks that testing groups are often asked to do include designing and implementing defect tracking, configuration management, and build-process ownership.

Table 7.1 details when standard test types are normally performed during the software development process. (Refer back to Chapter 3, "Software Testing Basics," for definitions of these test types.) Note that Release Acceptance Tests (RATs), Functional Acceptance Simple Tests (FASTs), and Task-Oriented Functional Tests (TOFTs) are generally run in each phase of testing. Web systems may require additional test types, such as security, database, and load/stress.

The next phase of test planning is laying out the tasks. After all available resources and test types have been considered, it's possible to begin to piece together a bottom-up schedule that details which tests will be performed and how much time each test will require (later, delegation of tasks to specific personnel should be incorporated into the test plan). A bottom-up schedule is developed by associating tasks with the time needed to complete them, with no regard to product ship date. A top-down schedule, in contrast, begins with the ship date and then details all tasks that must be completed if the ship date is to be met. Negotiations regarding test coverage and risk often involve elements of both top-down and bottom-up scheduling.

Test plans must undergo peer management and project management review. Like engineering specs, test plans need to be approved and signed before they are implemented. During a test-plan review, the testing group may need to negotiate with management over required resources, including schedule, equipment, and personnel. Issues of test coverage and risk-based quality or life-critical and 24/7 uptime quality may also come into play. (Refer back to Chapter 1, "Welcome to Web Testing," for more information on test coverage and risk-based quality.) Ultimately, a test plan will be agreed upon, and testing can begin.
Chapter 7 Table 7.1 Test Types and Their Place in the Software Development Process TIME Begin Alpha Testing Begin Beta Testing Begin Final Testing
Alpha Phase
Beta Phase
Final Phase
TYPES OF TESTS RECOMMENDED TOFT FAST RAT Configuration Compatibility* Boundary Test Stress Installation Test Exploratory Test
TOFT FAST RAT Real-World User Test Exploratory Test Forced-Error Test Full Configuration Compatibility Test Volume Test Stress Test Install/Uninstall Test Performance Test User Interface Regression Documentation
TOFT FAST RAT Install/Uninstall Test Real-World User-Level Test Exploratory Test
* Test one representative from each equivalence class.
Test-Plan Documentation

Test-plan documentation should detail all required testing tasks, offer estimates of required resources, and consider process efficiencies. Unless you are creating a test plan with the intention of distributing it to a third party, either to prove that proper testing was performed or to sell it along with software, it is best to keep test plans focused on only those issues that support the effort of finding bugs. Enormous, heavily detailed test plans, unless required by a customer or third-party regulating body, are only valuable insofar as they help you find bugs.

The unfortunate reality is that the majority of test plans sit unread on shelves during most of the testing process. This is because they are unwieldy and dense with information that does not support the day-to-day effort of finding bugs. Even if they are read, they are seldom updated as regularly as they should be, to reflect current changes to schedule, delegation of tasks, test coverage, and so on.
The LogiGear One-Page Test Plan (included later in this chapter) is designed specifically to avoid the troubles that more traditional test plans suffer; one-page test plans are more easily read and updated.

When read, test-plan documentation improves communication regarding testing requirements by explaining the testing strategy to all members of the product development team. Documentation is, of course, also valuable in conveying the breadth of a testing job to testing staff and in providing a basis for delegating tasks and supervising work.

Documentation generates feedback from testing team members and members of other departments. Debates are often sparked over the contents of test documentation. For example, project managers may insist on different levels of testing from those proposed by the testing group. Therefore, it is always a good idea to make test plans available for review as early in the development process as possible so that managers, programmers, and members of the marketing team can assess risk and priorities before testing begins. Debates are also more fruitful when team members can focus discussions on a clearly laid-out test plan that includes specific goals.

Issues of test coverage often arise midway through the testing process. Requiring managers to approve and sign test plans (before testing begins) brings managers into the test coverage decision process; moreover, it places responsibility on management to approve any compromises of test coverage that may arise from test-plan negotiations.

Accountability is also increased by good test documentation. Clearly defined responsibilities make it easier for both managers and staff to stay focused. Detailed lists of tests that must be performed, along with clearly defined expectations, go a long way toward ensuring that all foreseeable areas of risk are addressed.

Proper test documentation requires a systematic analysis of the Web system under test. Your understanding of the interdependencies of a system's components must be detailed and thorough if test planning is to be effective. As a test project is analyzed, a comprehensive list of program features should be compiled. It is common for a feature list to detail the complete set of product features, all possible menu selections, and all branching options. It is a good idea to begin writing a feature list as early in the test-planning phase as possible.

Test plans take into consideration many of the risks and contingencies that are involved in the scheduling of software development projects. For example, product documentation testing (e.g., online help, printed manuals) cannot be completed until the documentation itself nears completion. Documentation, however, cannot be in its final phase until after the user interface (UI) has been frozen. The UI, in turn, cannot be frozen until some point in beta testing, when functional errors affecting the UI have been fixed. Another example of testing interdependency is not being able to execute performance testing until all debugging code has been removed.
Compiling a list of features that are not to be tested will also be of value. Such a list sometimes smokes out resistance within the product team that might not otherwise have been voiced until midway through the testing process. It also clearly marks what you believe to be out of scope.
NOTE For more in-depth information regarding test planning, refer to Testing Computer Software by Cem Kaner, Jack Falk, and Hung Q. Nguyen (John Wiley & Sons, Inc., 1999).
Test-Plan Templates

One effective means of saving time and ensuring thoroughness in test-plan documentation is to work from a test-plan template. A test-plan template is, essentially, a fill-in-the-blank test plan into which information that is specific to the system under test is entered. Because they are generic and comprehensive, test-plan templates force test team members to consider questions that might not otherwise be considered at the beginning of a test project. They prompt testers to consider numerous test types, many of which may not be appropriate for the test project, in addition to pertinent logistical issues, such as which test tools will be required and where testing will take place. Test templates can also impose a structure on planning, encouraging detailed specifications on exactly which components will be tested, who will test them, and how testing will proceed.

Appendix A contains the complete LogiGear Test Plan Template. After looking it over, review some of the many other test templates available. A good place to begin looking for a test-plan template is the planning section of the LogiGear Test Resource Web site (www.qacity.com). A standard test-plan template used by the software testing industry is the ANSI/IEEE Standard 829-1983 for Software Test Documentation. It defines document types that may be included in test documentation, including test cases, feature lists, and platform matrices. It also defines the components that the IEEE believes should be included in a standard test plan; so, among other uses, it serves as a test-plan template. (For information regarding the ANSI/IEEE Standard 829-1983, visit www.computer.org, or phone (202) 371-0101.)
Test-Plan Section Definitions

The following lists define a number of standard test-plan sections that are appropriate for most test projects.
OVERVIEW SECTION

Test-plan identifier. Unique alphanumeric name for the test plan. (See Appendix A, "LogiGear Test Plan Template," for details.)

Introduction. Discussion of the overall purpose of the project. References all related product specifications and requirements documents.

Objective. Goals of the project, taking quality, scheduling constraints, and cost factors into consideration.

Approach. The overall testing strategy. Answer: Who will conduct testing? Which tools will be utilized? Which scheduling issues must be considered? Which feature groups will be tested?

TESTING SYNOPSIS SECTION

Test items. Lists every feature and function of the product. References specifications and product manuals for further detail on features. Includes descriptions of all software application, software collateral, and publishing items.

Features to be tested. Cross-references features and functions that are to be tested with specific test design specifications and required testing environments.

Features not to be tested. Features of the product that will not undergo testing. May include third-party items and collateral.

System requirements. Specifications on hardware and software requirements of the application under test: computer type, memory, hard-disk size, display type, operating system, peripheral, and drive type.

Entrance/exit. Application-specific: description of the application's working environment; how to launch and quit the application. Process-specific: description of criteria required for entering and exiting testing phases, such as alpha and beta testing.

Standard/reference. List of any standards or references used in the creation of the test plan.

Types of tests. Tests to be executed. May include acceptance tests, feature-level tests, system-level tests, regression tests, configuration and compatibility tests, documentation tests, online help tests, utilities and collateral tests, and install/uninstall tests.

Test deliverables. List of test materials developed by the test group during the test cycles that are to be delivered before the completion of the project. Includes the test plan itself, the bug-tracking system, and an end-of-cycle or final release report.
TEST PROJECT MANAGEMENT SECTION

The product team. List of product team members and their roles.

Testing responsibilities. Responsibilities of all personnel associated with the testing project.

Testing tasks. Testing tasks to be executed: the order in which tasks will be performed, who will perform the tasks, and dependencies.

Development plan and schedule. Development milestone definitions and criteria, detailing what the development group will deliver to testing, and when.

Test schedule and resource. Dates by which testing resources will be required. Estimates on the amount of tester hours and personnel required to complete the project.

Training needs. Personnel and training requirements. Special skills that will be required and the number of personnel that may need to be trained.

Environmental needs. Hardware, software, facility, and tool requirements of testing staff.

Integration plan. How the integration plan fits into the overall testing strategy.

Test suspension and resumption. Possible problems or test failures that justify the suspension of testing. Basis for allowing testing to resume.

Test completion criteria. Criteria that will be used to determine the completion of testing.

Issue-tracking process. Description of the process, the issue-tracking database, bug severity definitions, and issue report formats (see the "Issue Reports" section in this chapter for an example).

Status tracking and reporting. How status reports will be communicated to the development team, and what the content of status reports will be (see the "Weekly Status Reports" section in this chapter for an example).

Risks and contingencies. All risks and contingencies, including deliverables, tools, and assistance from other groups, even those risks and contingencies that are detailed in other parts of the test plan.

Approval process. Test-plan approval and final release approval.
LogiGear One-Page Test Plan

It is often a challenge for testing groups to communicate their needs to other members of the software development team. The myriad test types, the testing sequence, and scheduling considerations can be overwhelming when not
organized into a comprehensible plan that others can read at a glance. The LogiGear One-Page Test Plan is a distillation of test types, test coverage, and resource requirements that meets this need.

The LogiGear One-Page Test Plan is task-oriented. It lists only testing tasks, because some members of the product team may not be interested in "testing approach," "features not to be tested," and so on. They just want to know what is going to be tested and when. Because one-page test plans are so easy to reference, if they are adequate for your process, they are less likely to be disregarded by impatient team members.

The LogiGear One-Page Test Plan does not require additional work. It is simply a distillation of the standard test-plan effort into an easily digestible format. The LogiGear One-Page Test Plan is effective because it details the testing tasks that a testing team should complete, how many times the tasks should be performed, the amount of time each test task may require, and even a general idea of when the tasks should be performed during the software development process.

The LogiGear One-Page Test Plan is easy to reference and read. Twenty-page test plans are regularly ignored throughout projects, and 100-page test plans are rarely read at all. One-page test plans, on the other hand, are straightforward and can easily be used as negotiating tools when it comes time to discuss testing time and coverage (the usual question being, "How much testing time can be cut?"). The test team can point to test tasks listed on a one-page test plan and ask, "Are we prepared to accept the risk of not performing some of these tests to their described coverage?"
Developing a One-Page Test Plan

The process of completing a one-page test plan is described in the following steps.
Step 1: Test Task Definition

Review the standard test types that are listed in Chapter 3 and in Table 7.1. Select the test types that are required for the project. Base decisions on the unique functionality of the system under test. Discussions with developers, analysis of system components, and an understanding of test types are required to accurately determine which test types are needed.
Step 2: Task Completion Time

Calculate the time required to perform the tests. The most difficult aspect of putting together a test plan is estimating the time required to complete a test suite. With new testers, or with tests that are new to experienced testers, the
time estimation process involves a lot of guesswork. The most common strategy is divide and conquer; that is, break the work down into smaller subtasks. Smaller subtasks are easier to estimate, and you can then sum those estimates. As you gain experience, you miss fewer tasks, and you gain a sense of the percentage of tasks that you typically miss, so you can add n percent as a contingency or missing-tasks correction. Informed estimates may also be arrived at if testing tasks are similar to those of a past project. If time records of similar past testing are not available, estimates may be unrealistic. One solution is to update the test plan after an initial series of tests has been completed.

A 20 percent contingency or missing-tasks correction is included in this example. As testing progresses, if this contingency does not cover the inevitable changes in your project's schedule, the task completion time will need to be renegotiated.
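The divide-and-conquer arithmetic above is easy to keep in a small script or spreadsheet. The following sketch sums hypothetical subtask estimates and applies the same 20 percent missing-tasks correction used in this example; the task names and hours are illustrative, not prescribed values:

```python
# Bottom-up estimate: sum subtask hours, then add a contingency
# percentage for the tasks that experience says will be missed.
subtasks = {
    "Write TOFT cases for user setup": 6,   # hypothetical hours
    "Run TOFT suite on build 1": 8,
    "Log and verify defect fixes": 4,
    "Update test cases after UI change": 2,
}

contingency = 0.20  # 20 percent missing-tasks correction

base_hours = sum(subtasks.values())
estimate = base_hours * (1 + contingency)

print(f"Base estimate: {base_hours} hours")
print(f"With {contingency:.0%} contingency: {estimate:.1f} hours")
```

As actual times are recorded, the subtask figures (and the contingency percentage) can be revised and the total recomputed.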
Step 3: Placing the Test Tasks into Context

Once the task list has been developed and test times have been estimated, place the tasks into the context of the project. The development team will need to supply a build schedule. Determine how many times tests will be run during development. For example, documentation testing may be performed only once, or it may be reviewed once in a preliminary phase and then again after all edits are complete. A complete cycle of functionality tests may be executed once, or possibly twice, per development phase. Acceptance tests are run on every build. Often, a full bug regression occurs only once per phase, though partial regression tests may happen with each build.
Step 4: Table Completion

Finally, multiply the numbers across the spreadsheet. Total the hours by development phase for an estimate of required test time for the project. Add time for management, including test-plan writing/updating, test-case creation, bug database management, staff training, and other tasks that are needed for the test team and for completion of the project.
Step 5: Resource Estimation

Take the total number of hours required for the alpha phase, divide that by the total number of weeks in the alpha phase, and then divide that by 30 hours per week. That gives you the number of testers needed for that phase. For example, if you need total testing hours for an alpha of 120, a four-week alpha phase,
and testers have a 30-hour testing week, your project requires only one tester [(120 ÷ 4) ÷ 30 = 1]. Apply this same process to arrive at estimates for the beta phase and project management. Note that, here, only a 30-hour testing week was used for a full-time tester because experience has shown that the other 10 (overhead) hours are essentially used for meetings, training, defect tracking, research, special projects, and so on.
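The head-count arithmetic in Step 5 can be captured in one small function. This is only a sketch of the calculation described above, using the 30-hour effective testing week and the example numbers from the text:

```python
def testers_needed(total_test_hours, phase_weeks, testing_hours_per_week=30):
    """Return the number of full-time testers needed for a phase.

    Divides the total test hours by the number of weeks in the phase, then
    by the effective testing hours per tester per week (30 of a 40-hour
    week, the rest going to meetings, training, and defect tracking).
    """
    return (total_test_hours / phase_weeks) / testing_hours_per_week

# Example from the text: (120 / 4) / 30 = 1 tester for the alpha phase.
print(testers_needed(120, 4))  # 1.0
```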
Using the LogiGear One-Page Test Plan

The LogiGear One-Page Test Plan can be invaluable in negotiating testing resource and testing time requirements with members of the product team. Figure 7.1 provides an example of the LogiGear One-Page Test Plan. Descriptions of each of the tests are included in Chapter 3.
Figure 7.1 LogiGear One-Page Test Plan. The form is a table with columns for milestone, type of test, number of cycles, hours per cycle, and estimated hours. Rows group the test types under the Alpha, Beta, Final, and Testing Project Management milestones (the last covering test planning and test-case design, training, and test automation), each with a subtotal. Summary rows capture project total days, person weeks, a 20 percent contingency (in weeks), total person weeks, and the number of testers needed for alpha, beta, final, and project management.
Testing Considerations

As part of the test planning process, you should consider how the bug reporting/resolution cycle will be managed and the procedure for status reporting. In addition, you should give some thought to how to manage milestone criteria, as well as whether to implement an automated testing program. This section touches on those issues.
Issue Reports

An issue report, or test incident report, is submitted whenever a problem is uncovered during testing. Figure 7.2 shows an example of an online issue report that is generated by the sample application. The following list details the entries that may be included in a complete issue report:

ISSUE REPORT FIELDS

Project. A project may be anything from a complex client-server system with multiple components and builds to a simple 10-page user's guide.

Build. Builds are versions or redesigns of a project that is in development. A given project may undergo numerous revisions, or builds, before it is released to the public.

Module. Modules are parts, components, units, or areas that comprise a given project. Modules are often thought of as units of software code.

Configuration. Configuration testing involves checking an application's compatibility with many possible configurations of hardware. Altering any aspect of hardware during testing creates a new testing configuration.

Uploading attachments. Attachments are uploaded along with issue reports to assist QA and developer groups in identifying and re-creating reported issues. Attachments may include keystroke captures or macros that generate an issue, a file from a program, a memory dump, a corrupted file on which an issue report is predicated, or a memo describing the significance of an issue.

Error types. The category of error into which an issue report falls (e.g., software incompatibility, UI, etc.).

Keyword. Keywords are an attribute type that can be associated with issue reports to clarify and categorize an issue's exact nature. Keywords are useful for sorting reports by specific criteria to isolate trends or patterns within a report set.
Figure 7.2 Online issue report form.
Reproducible. Specifies whether a reported issue can be re-created: Yes, No, with Intermittent success, or Unconfirmed.

Severity. Specifies the degree of seriousness that an issue represents to users. For example, a typo found deep within an online help system might be labeled with a severity of low, and a crash issue might qualify for a severity of high.

Frequency. Frequency, or how often an issue exhibits itself, is influenced by three factors: (1) how easily the issue can be reached, (2) how frequently the feature that the issue resides in is used, and (3) how often the problem is exhibited.

Priority. An evaluation of an issue's severity and frequency ratings. An issue that exhibits itself frequently and is of a high severity will naturally receive a higher-priority rating than an issue that seldom exhibits itself and is only of mild annoyance when it does appear.

Summary. A brief summary statement that concisely sums up the nature of an issue. A summary statement should convey three elements: (1) symptoms, (2) actions required to generate the issue, and (3) operating conditions involved in generating the issue.
Steps. Describes the actions that must be performed to re-create the issue.

Notes and comments. Additional pertinent information related to the bug that has not been entered elsewhere in the report. Difficult-to-resolve bugs may develop long, threaded discussions consisting of comments from developers, project managers, QA testers, and writers.

Assigned. Individuals who are accountable for addressing an issue.

Milestone stopper. An optional bug report attribute that flags bugs severe enough to prevent a project from reaching its next development milestone. By associating critical bugs with production milestones, milestone stoppers act as independent criteria by which to measure progress.
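The fields listed above map naturally onto a simple record structure, which is convenient when tabulating or exporting issue data. The sketch below follows the field list; the enumerated values and the sample report are hypothetical and do not represent TRACKGEAR's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IssueReport:
    project: str
    build: str
    module: str
    configuration: str
    error_type: str
    keywords: List[str]
    reproducible: str            # "Yes", "No", "Intermittent", "Unconfirmed"
    severity: str                # e.g., "low", "medium", "high"
    frequency: str
    priority: int
    summary: str                 # symptoms, actions, and operating conditions
    steps: List[str]
    assigned_to: str
    milestone_stopper: bool = False
    attachments: List[str] = field(default_factory=list)
    notes: List[str] = field(default_factory=list)

# Hypothetical example report.
report = IssueReport(
    project="TRACKGEAR", build="2.0.118", module="Metrics",
    configuration="Windows NT 4.0 / IE 5.5 SP2", error_type="UI",
    keywords=["chart", "applet"], reproducible="Yes", severity="high",
    frequency="always", priority=1,
    summary="Trend chart applet fails when the date range is left empty",
    steps=["Open trend metrics setup", "Leave date range blank", "Click Generate"],
    assigned_to="developer@example.com",
)
```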
Weekly Status Reports

At the conclusion of each week during testing, the testing team should compile a status report. The sections that a status report normally includes follow. Weekly status reports can take on critical importance because they are often the only place where software changes are tracked. They detail such facts as prerequisite materials not arriving on time, icons not loading onto desktops properly, and required documentation changes. Once archived, they, in effect, document the software development process.

Consideration must be given to what information will be included in weekly status reports and who will receive the reports. Just as test plans need to be negotiated at the beginning of a project, so do weekly status reports. The manner in which risks will be communicated to the development team needs to be carefully considered because information detailed in these reports can be used against people to negative effect. Possibly, only milestone status reports should be disseminated to the entire product team, leaving weekly status reports to be viewed only by a select group of managers, testers, and developers. (See Appendix B, "Weekly Status Report Template.")

Following are descriptions of sections that are typically included in weekly status reports:

TESTING PROJECT MANAGEMENT

Project schedule. Details testing and development milestones and deliverables.

Progress and changes since last week. Tests that have been run and new bugs that have been discovered in the past week.

Urgent items. Issues that require immediate attention.

Issue bin. Issues that must be addressed in the coming weeks.

To-do tasks by next report. Tasks that must be completed in the upcoming week.
PROBLEM REPORT STATUS

Bug report tabulation. Totals of open and closed bugs; explanation of how totals have changed in the past week.

Summary list of open bugs. Summary lines from issue reports associated with open bugs.

TREND ANALYSIS REPORT

Stability trend chart. Graph that illustrates the stability of a product over time.

Quality trend chart. Graph that illustrates the quality of a product over time.
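The bug report tabulation above can be generated directly from issue-tracking data rather than counted by hand. A minimal sketch, using hypothetical status values and counts:

```python
from collections import Counter

# Issue statuses pulled from the tracking database for last week and this
# week (the values and counts here are hypothetical).
last_week = ["open"] * 42 + ["closed"] * 130
this_week = ["open"] * 35 + ["closed"] * 151

def tabulate(statuses):
    counts = Counter(statuses)
    return counts["open"], counts["closed"]

open_now, closed_now = tabulate(this_week)
open_prev, closed_prev = tabulate(last_week)

print(f"Open bugs:   {open_now} ({open_now - open_prev:+d} since last week)")
print(f"Closed bugs: {closed_now} ({closed_now - closed_prev:+d} since last week)")
```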
NOTE Numerous other document types may be included in test-plan documentation. For definitions of other test documentation types (including test-design, test-procedure, and test-case specifications; test transmittal reports; and test logs), refer to Testing Computer Software by Kaner et al. (1999).
Automated Testing

The sample One-Page Test Plan given in Chapter 9 can be analyzed to uncover areas that may be well suited to automated testing. Considerations regarding staffing, management expectations, costs, code stability, UI/functionality changes, and test hardware resources should be factored into all automated testing discussions. Table 7.2 categorizes the testing tasks called for in the sample one-page test plan by their potential adaptability to automated testing; further evaluation would be required to definitively determine whether these testing tasks are well suited to automation.

Table 7.2 Test Types Suited for Automation Testing

Ideally suited: RAT; FAST; performance, load, and stress; metrics/charting; regression; database population; sample file generation; browser compatibility.

Not suitable: documentation; boundary; installation; most functionality; exploratory; import utility; forced-error.
When evaluating test automation, do the following (a small ranking sketch follows this list):

- Look for the tests that take the most time.
- Look for tests that could otherwise not be run (e.g., server tests).
- Look for application components that are stable early in development.
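One way to apply the first guideline is to rank test tasks by the total tester time they consume over the project. The sketch below does this with hypothetical tasks and numbers; real figures would come from the one-page test plan:

```python
# Rank test tasks by total manual effort (hours per run x number of runs)
# to find the strongest candidates for automation.
tasks = [
    # (name, hours per run, runs per project) -- hypothetical values
    ("RAT", 0.5, 20),
    ("FAST", 2, 20),
    ("Regression suite", 4, 8),
    ("Browser compatibility", 80, 2),
]

ranked = sorted(tasks, key=lambda t: t[1] * t[2], reverse=True)
for name, hours, runs in ranked:
    print(f"{name:25s} {hours * runs:6.1f} total hours")
```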
NOTE For more information on automated test planning, see Integrated Test Design and Automation: Using the Testframe Method, 1st ed., by Hans Buwalda, Dennis Janssen, Iris Pinkster, and Paul A. Watters (Boston: Addison-Wesley, 2001).
Milestone Criteria and Milestone Tests

Milestone criteria and milestone tests should be agreed upon and measurable (for example, you might decide that alpha testing will not begin until all code is testable and installable and all UI screens are complete, even if they contain errors). Such criteria can be used to verify whether code should be accepted or rejected when it is submitted for milestone testing. Milestone criteria and accompanying tests should be developed for all milestones, including completion of testing, entrance, and exit. Ideally, these tests will be developed by the test team and approved by the development team; this approach may reduce friction later in the development project.
Bibliography

Kaner, Cem, Jack Falk, and Hung Q. Nguyen. Testing Computer Software, 2nd ed. New York: John Wiley & Sons, Inc., 1999.
Kaner, Cem, James Bach, and Bret Pettichord. Lessons Learned in Software Testing, 1st ed. New York: John Wiley & Sons, Inc., 2001.
LogiGear Corporation. QA Training Handbook: Lead Software Test Project with Confidence. Foster City, CA: LogiGear Corporation, 2003.
———. QA Training Handbook: Testing Web Applications. Foster City, CA: LogiGear Corporation, 2003.
———. QA Training Handbook: Creating Excellent Test Project Documentation. Foster City, CA: LogiGear Corporation, 2003.
———. QA Training Handbook: Testing Windows Desktop and Server-Based Applications. Foster City, CA: LogiGear Corporation, 2003.
———. QA Training Handbook: Testing Computer Software. Foster City, CA: LogiGear Corporation, 2003.
———. QA Training Handbook: Test Automation Planning and Management. Foster City, CA: LogiGear Corporation, 2003.
CHAPTER 8

Sample Application
Why Read This Chapter?

Some of the testing concepts covered in this book may seem abstract until they are applied in practice to an actual Web application. By seeing how the features of the sample application are accounted for in the sample test plan (see Chapter 9, "Sample Test Plan," for details), readers can gain insights into effective Web application test planning.

TOPICS COVERED IN THIS CHAPTER

◆ Introduction
◆ Application Description
◆ Technical Overview
◆ System Requirements
◆ Functionality of the Sample Application
◆ Bibliography
Introduction

This chapter details the features and technologies that are associated with the sample application, including a system overview and application functionality. The sample application, called TRACKGEAR, is helpful in illustrating test planning issues that relate to browser-based Web systems; it places into context many of the issues that are raised in upcoming chapters.

TRACKGEAR is a Web-based defect-tracking system produced by LogiGear Corporation. Throughout the book you will see TRACKGEAR used to exemplify various topics. These examples appear both in the chapter discussions and in boxed text. In Chapter 9, it serves as a baseline from which a high-level test plan is developed.
NOTE At the time of this writing, TRACKGEAR 2.0 had been released. This version offers many new features and improvements over version 1.0. For the latest information on this product, please visit www.logigear.com.
Application Description

The sample application, TRACKGEAR, is a problem-tracking system designed for software development teams. It is used to manage the processing and reporting of change requests and defects during software development. The sample application allows authorized Web users, regardless of their hardware platform, to log in to a central database over the Internet to remotely create and work with defect reports, exchange ideas, and delegate responsibilities. All software development team members (project management, marketing, support, QA, and developers) can use the sample application as their primary communications tool.

The sample application offers a relatively complex system from which to explore test planning. TRACKGEAR supports both administrator and user functionality. Using it requires a database server, a Web server, and an application server.

The sample application's features include:

- Defect tracking via the Internet, an intranet, or an extranet
- Customizable workflow that enforces accountability among team members
- Meaningful color metrics (charts, graphs, and tables)
- E-mail notification that alerts team members when defects have changed or require their attention
Technical Overview

Following are some key technical issues that relate directly to the testing of the sample application:

- The application server should be installed on the same physical hardware box as the Web server. Such a configuration eliminates potential performance issues that may result from the application accessing the Web server on a separate box. Figure 8.1 shows the recommended configuration of a system.
- The sample application uses Active Server Page (ASP) technology (refer back to Chapter 5, "Web Application Components," for details about ASP). Web servers process ASP scripts, based on user requests, before sending customized pages back to the user. The ASP scripts are similar to server-side includes and Common Gateway Interface (CGI) scripts, in that they run on the Web server rather than on the client side. The ASP scripts do not involve a client-side install. This thin-client model involves the browser sending requests to the Web server, where ASP computes and parses requests for the application, database server, and Web server.
Figure 8.1 Recommended system configuration: client machines connect over Ethernet to a single physical server that hosts both the application server and the Web server; that server communicates with the database server, which runs on its own physical server.
- The CGI scripts are not used in the sample application.
- The database activities (queries and stored procedures) are supported via Microsoft SQL 7 or higher.
- A single Java applet runs on the client browser to display defect metrics (charts and graphics). Only fourth-generation browsers (4.0 or higher) are supported by the sample application.
- Both the Web server and the application server must utilize Microsoft technology (IIS, NT, etc.).
System Requirements

The hardware and software requirements of the sample application are as follows:

SERVER REQUIREMENTS

- Computer. PC with a Pentium processor (Pentium II or higher recommended)
- Memory. 128 MB (256 MB recommended)
- Disk space. 100 MB for the server application and 200 MB for the database
- Operating system. Microsoft Windows NT Server 4.0 with the most recent service pack, or Windows 2000 Server with Service Pack 2
- Web server software. Microsoft Internet Information Server (IIS) 4.0 or higher
- SQL server software. Microsoft SQL Server 7.0 with Service Pack 2
- Microsoft Internet Explorer. 5.5 Service Pack 2 or higher (installed on the server)
- Microsoft Visual SourceSafe. 6.0 or higher

CLIENT REQUIREMENTS

- Active LAN or Internet connection
- Netscape Navigator 4.7 or higher on Windows-based PCs only
- Microsoft Internet Explorer 5.5 SP2 or higher on Windows-based PCs
Functionality of the Sample Application

The material in this section details the functionality of the sample application.
Installing the Sample Application

The sample application utilizes a standard InstallShield-based installation program that administrators (or IS personnel) must run to set up the databases that are required by the application. This installation wizard automates the software installation and database configuration process, allowing administrators to identify preexisting system components (Web server, IIS server, physical hardware boxes, etc.), determine where new components should be installed, and define how much disk space to allocate for databases.
Getting Started

The sample application allows users to define workflow processes that are customized for their organization's defect-tracking needs. Workflow dictates, among other things, who has the privilege to assign resolutions (i.e., defect states) and who is responsible for addressing defect-related concerns. The sample application allows administrators to hardwire such resolution management processes and to enforce accountability.

User, group, division, and project assignments dictate the screen layouts and functionality that administrators and different user types can access. The administrator of the application has access to administrator-level functions, such as user setup, project setup, and database setup, in addition to all standard user functionality, including report querying, defect report submission, and metrics generation.
Division Databases

The sample application acts as an information hub, controlling data flow and partitioning defect-tracking data. A company may use as many division-specific databases as it wishes. Some information will be shared globally (for example, the application itself). Other information, including reports and functional groups, will be relevant only to specific projects or divisions, and therefore will not be shared globally across division databases.
Importing Report Data

The sample application works with an import utility (part of MS SQL Server) that allows administrators to import existing databases. Specifically, the program allows the import of comma-separated values (CSV) files. These CSV files can be exported from other database programs, such as Microsoft Access, Excel, and Oracle. In order for the sample application to properly process imported data, it is important that MS SQL's guidelines be adhered to when creating the CSV files.
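When testing the import utility, it helps to generate CSV input files programmatically so that delimiters, quoting, and problem cases (embedded commas and quotes, empty required fields) are under the tester's control. A sketch follows; the column names are hypothetical, and real files must follow MS SQL Server's CSV guidelines and the target database schema:

```python
import csv

# Hypothetical columns for an imported defect report; the real layout must
# match the division database schema and MS SQL Server's CSV guidelines.
columns = ["summary", "severity", "module", "reported_by"]

rows = [
    ["Login button misaligned", "low", "UI", "tester1@example.com"],
    ['Crash when summary contains "quotes", commas', "high", "Reports", "tester2@example.com"],
    ["", "medium", "Metrics", "tester3@example.com"],  # empty required field
]

with open("import_test.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(rows)
```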
System Setup

Many of the sample application's attributes can be customized. Customizable system attributes include the following:

- Keywords
- Error types
- Resolutions
- Severity
- Phases
- Milestone stoppers
- Frequency
- Priority
- Workflow (the method by which reports are routed)
Project Setup

The key components of every project are project name, project members, project modules, project builds, and optional e-mail notification.
E-Mail Notification

The sample application utilizes e-mail to notify and inform individuals of their responsibilities regarding defects that are tracked. E-mail notification settings are flexible and can be customized for each project. For example, one project team might require notification for all defects that could prevent their product from going beta. This team's e-mail notification settings could then be set up to alert them only when a received defect has a milestone-stopper value of beta. Likewise, a team whose product is nearing its release date could choose to have hourly summaries of every defect report in the system sent to them.

The sample application uses the Simple Mail Transfer Protocol (SMTP) to deliver notifications (most popular e-mail clients are compatible: Eudora, Microsoft Exchange, Microsoft Outlook Express, and others).
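Because notification travels over standard SMTP, a basic end-to-end check can be scripted: send a message through the same SMTP host the application is configured to use and confirm that the server accepts it. The host name and addresses below are placeholders, not values from the sample application:

```python
import smtplib
from email.message import EmailMessage

# Placeholder values; substitute the SMTP host and addresses configured
# for the system under test.
SMTP_HOST = "mail.example.com"
SENDER = "trackgear@example.com"
RECIPIENT = "qa-team@example.com"

msg = EmailMessage()
msg["Subject"] = "TRACKGEAR notification smoke test"
msg["From"] = SENDER
msg["To"] = RECIPIENT
msg.set_content("If you receive this, the SMTP path used for notification works.")

with smtplib.SMTP(SMTP_HOST) as server:
    server.send_message(msg)  # raises an exception if the server rejects it
```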
Submitting Defect Reports

Users of the sample application must go to the report screen to submit new defect reports (Figure 8.2). The report screen includes fields for recording relevant defect-tracking information. To get to the report screen, users click the New button on the navigation bar.
Figure 8.2 Sample application report screen.
Generating Metrics

The sample application includes a third-party Java applet that allows users to generate metrics (charts, graphs, and tables of information) to gain global perspective over defect reports. Project managers, developers, and software-quality engineers in particular can gain insight into defect-fixing trends, personnel workload, and process efficiency by viewing trend and distribution metrics.

The sample application generates two types of metrics: (1) distribution metrics and (2) trend metrics. Figure 8.3 shows the distribution metrics setup screen. Figure 8.4 shows a typical distribution metric. Figure 8.5 shows the trend metrics setup screen. Figure 8.6 shows a typical trend metric.
Figure 8.3 Distribution metrics setup screen.
Figure 8.4 Distribution metrics example.
Figure 8.5 Trend metrics setup screen.
Documentation

Documentation for the sample application comes in the following three forms:

1. Administrator's guide. A printed manual that provides administrators with the information they need to set up and manage the sample application.
2. User's guide. A printable Adobe Acrobat Reader .pdf manual that provides software testers and product team members with the information they need to submit reports, find reports, and advance workflow.
3. Online help. A context-sensitive help system that resides within the sample application. The help system is accessible via the Help button on the navigation bar.
Figure 8.6 Trend metrics example.
Bibliography

LogiGear Corporation. QA Training Handbook: Testing Web Applications. Foster City, CA: LogiGear Corporation, 2003.
———. TRACKGEAR Administrator Guide. Foster City, CA: LogiGear Corporation, 2001.
CHAPTER 9

Sample Test Plan
Why Read This Chapter?

In this chapter we take the knowledge gained so far in "Software Testing Basics" (Chapter 3) and "Test Planning Fundamentals" (Chapter 7) and apply it to the "Sample Application" (Chapter 8). In this chapter we will use TRACKGEAR to gain test planning experience as it applies to Web applications. Therefore, it is recommended that you read both Chapters 7 and 8 before proceeding with this chapter. The test types listed in this chapter are explored in more detail in Part Three. The sample application is also referenced throughout upcoming chapters.

TOPICS COVERED IN THIS CHAPTER

◆ Introduction
◆ Gathering Information
◆ Sample One-Page Test Plan
◆ Bibliography
Introduction

This chapter discusses the test types that are appropriate for the sample application. It includes both a test schedule and a one-page test plan that are designed for the sample application.
NOTE The sample test plan is high level by design. A complete test plan for the sample application is not feasible within the constraints of this book.
The information conveyed in Chapter 8 serves as a technical baseline for the test planning purposes of this chapter. As far as planning for other projects, getting involved early in the development process and discovering reliable sources of information is the best way to gather required technical data. Product prototypes, page mock-ups, preliminary documentation, specifications, and any marketing requests should be evaluated; such information, combined with experience and input from application developers, comprises the best means of determining required testing.

Input from the project team should focus the test-plan effort on potential problem areas within the system under test. Preliminary project schedules and an estimated number of builds should be considered in the development of any testing schedule.

With basic QA knowledge, the information about Web testing conveyed in this book, input from the development team, and an understanding of product functionality, a test planner can confidently develop a list of test types for the system under test (refer back to Table 7.1 for details on test scheduling). Once a list of test types has been developed, staffing needs can be evaluated by considering the number of hours and types of skills that will be required of the testing team. Keep in mind, however, that required tester hours and skills will undoubtedly fluctuate as development progresses. Estimates of testing hours required for testing the sample project are detailed later in this chapter.
Gathering Information

The information-gathering process consists of four steps: (1) establishing testing-task definitions, (2) estimating the time required to complete the testing tasks, (3) entering the information into the project plan, and (4) calculating the overall resource requirements.
Step 1: Testing-Task Definitions for the Sample Application

Step 1 in the one-page test planning process involves assembling a list of tasks for the project at hand. First, define the test types. The basic tests for Web applications are acceptance (both release acceptance test (RAT) and functional acceptance simple test (FAST)), functionality (task-oriented functional test (TOFT)), installation, user interface (UI), regression, forced-error, configuration and compatibility, server, security, documentation, and exploratory.

By reviewing the product description detailed in Chapter 8, you can see a need for specific test types that are not included in the preceding list of basic test types. For example, tests should be developed that test the functionality of the databases, data import utility, e-mail notification, and third-party Java applet (metrics charting). The screenshots indicate functionality that should be tested. Some security features that should be tested are also mentioned (login/logout, views, and user permissions). By reviewing the product's system requirements, you can also glean information about test platforms, possible configuration tests, and other technologies that will require testing: Java applets, Microsoft NT (required), and Active Server Page (ASP), rather than Common Gateway Interface (CGI).

The general descriptions given in Chapter 8 do not provide enough information to help you develop an informed testing schedule and list of testing tasks. Much more detail than can be conveyed in this book is required to make such projections. For example, information regarding the number of error messages (and their completion dates) would be required, as would details of the installation process. Complete product descriptions, specifications, and marketing requirements are often used as a starting point from which you can begin to seek out the specific technical information that is required to generate test cases.
Step 2: Task Completion Time

The test times listed in Table 9.1 reflect the actual testing of the sample application. These test times were derived based on input from the test team.

Table 9.1 Task Completion Time

RAT. Time estimate: 30 minutes for each build.

FAST. Time estimate: 2 hours for each build.

TOFT. Functional areas: admin functionality (user setup, project setup, system setup, division setup); user functionality (submit new report, easy find, quick find, form find, custom find, configuration profiles, preferences, metrics); miscellaneous (upload attachments, password, editing reports, views, tabular layouts). Time estimate: 80 hours for a complete run. Notes: These tests represent the majority of testing that must be performed. The entire suite of TOFT tests should be run once during alpha testing, twice during beta testing, and once during final testing. Testing should be segmented as coding is completed and as bugs are fixed.

Installation. Functional areas: full installation, uninstaller, database initialization, division creation. Time estimate: 40 hours. Notes: Test functionality, not compatibility. These tests should be performed once at the end of alpha testing, once during beta testing, once during beta testing when the known installer bugs have been closed, and once again during final testing. Often, installers are not ready to be tested until well into alpha testing, or even at the beginning of the beta phase.

Data import utility. Time estimate: 16 hours. Notes: CSV test data is required.

Third-party functionality testing. Functional area: metrics/chart-generation feature. Time estimate: 20 hours. Notes: Sample input data is required for the metrics function to generate charts.

Exploratory. Time estimate: 16 hours per build. Notes: These are unstructured tests.

User interface. Functional area: every screen. Time estimate: tested while testing functionality.

Regression. Time estimate: 4 hours. Notes: Test suites are built as errors are uncovered.

Forced-error. Functional area: confirm all documented error messages. Time estimate: 20 hours. Notes: Run the suite twice. Can only be performed after all messages have been coded. There are 50 error messages in the sample application.

Configuration and compatibility. Functional areas: browser settings (cookies, security settings, Java, preferences); browser types for Macintosh, Windows, and UNIX (Netscape Navigator, Internet Explorer); browser functions (Back, Reload, Print, cache settings); server installation compatibility; e-mail notification. Time estimate: 80 hours. Notes: Quick-look tests must be developed. A matrix of browsers, operating systems, and hardware-equivalent classes must be developed.

Server. Functional area: performance, load, and stress tests. Time estimate: 100 hours.

Documentation. Functional areas: printed manual, online help system, downloadable user guide (PDF file). Time estimate: 80 hours. Notes: Functionality and content.

Y2K and boundary testing. Notes: Test cases are included in the functionality tests (TOFT).
As part of evaluating tasks for completion time, you should evaluate resources such as hardware/software and personnel availability. Some test types require unique resources, tools, particular skill sets, assistance from outside groups, and special planning. Such test types include:

- Configuration and compatibility testing. Configuration and compatibility testing requires a significant amount of computer hardware and software. Because the cost of outfitting a complete test lab exceeds the financial means of many companies, outsourcing solutions are often considered. See Chapter 17, "Configuration and Compatibility Tests," for more information.
- Automated testing. Automated testing packages (such as Segue SilkTest and Mercury Interactive's WinRunner) are valuable tools that can, when implemented correctly, save testing time and other resources and ensure tester enthusiasm. See Chapter 21, "Web Testing Tools," for information about available automated testing tools.
- Milestone tests. Milestone tests are performed prior to each development milestone. They need to be developed, usually from TOFT tests, and scheduled according to the milestone plan.
- Special functionality tests (TOFT). In addition to the specified functionality of the application, SMTP tests (e-mail notification) are also included in the TOFT suite. These tests may require assistance from other groups or special skill sets.
- Web- and client-server-specific tests. Performance, load, and stress tests, in addition to security and database tests, normally require specialized tools and skills.
All required tests should be identified as early in the development process as possible so that resource needs for tools, staffing, and outsourcing can be evaluated.
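The configuration and compatibility work planned in Table 9.1 calls for a matrix of browsers, operating systems, and hardware-equivalent classes. Enumerating the matrix up front makes the size of the job (and any case for trimming or outsourcing it) concrete. The equivalence classes below are illustrative only; real lists would also cover the per-platform browser availability (Macintosh and UNIX rows, for example):

```python
from itertools import product

# Illustrative equivalence classes; the real lists come from the project's
# configuration/compatibility analysis.
browsers = ["Netscape Navigator 4.7", "Internet Explorer 5.5 SP2"]
operating_systems = ["Windows 98", "Windows NT 4.0", "Windows 2000"]
hardware_classes = ["minimum-spec machine", "high-end machine"]

matrix = list(product(browsers, operating_systems, hardware_classes))
print(f"{len(matrix)} configurations to cover")
for browser, os_name, hw in matrix:
    print(f"{browser:25s} {os_name:15s} {hw}")
```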
Step 3: Placing Test Tasks into the Project Plan

For the purposes of the sample test plan, a development schedule of 20 calendar weeks has been assumed. Testable code is expected early in July. According to the development team, there will be one build per week.

PRELIMINARY BUILD SCHEDULE

Alpha: 12 weeks
Beta: 6 weeks
Final: 2 weeks
Again referring back to Table 7.1, you can see which test phases are appropriate for each test type. Table 9.2 delineates the development phases and test planning. (Test types from this table are examined in detail in the upcoming chapters of Part Three.)

WHERE TO FIND MORE INFORMATION

- For information about RAT, FAST, TOFT, regression, and forced-error tests, please see Chapter 11, "Functional Tests."
- For information about configuration and compatibility tests, please see Chapter 17, "Configuration and Compatibility Tests."
- For information about install tests, please see Chapter 16, "Installation Tests."
- For information about database tests, please see Chapter 14, "Database Tests."
- For information about exploratory tests and an example of a third-party component, please refer back to Chapter 3, "Software Testing Basics."
- For information about security testing, please see Chapter 18, "Web Security Testing."
- For information about documentation tests, please see Chapter 15, "Help Tests."
- For information about server testing, please see Chapter 12, "Server-Side Testing," and Chapter 19, "Performance Testing."

Table 9.2 Development Phases and Test Planning

Time line: 7/12/2002 to 11/26/2002.

Alpha Phase (begins 7/12/2002; twelve weeks = 60 business days). Types of tests to be executed: RAT, FAST, TOFT (user and admin), configuration and compatibility, install, exploratory.

Beta Phase (begins 10/04/2002; six weeks = 30 business days). Types of tests to be executed: RAT, FAST, TOFT (user and admin), server testing (stress/load/performance), complete configuration and compatibility, regression, install, forced-error, documentation, database, exploratory, third-party component integration, security.

Final Phase (begins 11/15/2002; two weeks = 10 business days, ending at ship). Types of tests to be executed: RAT, FAST, TOFT, regression, exploratory.
Step 4: Calculate Hours and Resource Estimates

Multiply and total the test times (refer to the section "Developing a One-Page Test Plan" in Chapter 7 for details). Then calculate resource estimates. The one-page test plan is now complete!
Sample One-Page Test Plan

Table 9.3 is a one-page test plan that addresses the special needs of the sample application. Note that time has been budgeted for issue reporting, research, meetings, and more.

Table 9.3 Sample Test Plan (type of test: number of cycles x hours per cycle = estimated hours)

Alpha
- RAT: Release Acceptance Test: 12 x 0.5 = 6
- FAST: Functional Acceptance Simple Test: 12 x 2 = 24
- TOFT: Task-Oriented Functional Test: 2 x 80 = 160
- Configuration Compatibility: 1 x 80 = 80
- Install: 1 x 40 = 40
- Exploratory Testing: 12 x 16 = 192
- Total: 502

Beta
- RAT: Release Acceptance Test: 6 x 0.5 = 3
- FAST: Functional Acceptance Simple Test: 6 x 2 = 12
- TOFT: Task-Oriented Functional Test: 1 x 80 = 80
- Server Tests (Performance, Stress, and Load): 2 x 100 = 200
- Compatibility/Configuration (Browser, Install): 1 x 80 = 80
- Regression Testing: 6 x 4 = 24
- Install: 1 x 40 = 40
- Forced-Error Test: 2 x 20 = 40
- Documentation/Help (function and content): 1 x 80 = 80
- Database Integrity Test: 1 x 20 = 20
- Exploratory Testing: 6 x 16 = 96
- Data Import: 1 x 16 = 16
- Third-Party Component Integration: 3 x 20 = 60
- Security: 1 x 40 = 40
- Total: 791

Final
- RAT: Release Acceptance Test: 2 x 0.5 = 1
- FAST: Functional Acceptance Simple Test: 2 x 2 = 4
- TOFT: Task-Oriented Functional Test: 1 x 80 = 80
- Regression Testing: 1 x 20 = 20
- Exploratory Testing: 1 x 16 = 16
- Total: 121

Testing Project Management
- Test Planning and Test Case Design: 40 hours
- Training: 20 hours
- Total: 60

PROJECT TOTAL HOURS: 1,474
PROJECT TOTAL DAYS: 184
Person Weeks (30 hrs/wk): 49
20 Percent Contingency (weeks): 10
Total Person Weeks: 59
Testers for Alpha: 1.25
Testers for Beta: 4.4
Testers for Final: 2
Project Management: 1
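The arithmetic behind Table 9.3 can be checked mechanically. The following sketch recomputes the phase totals and the summary figures from the cycle counts and hours per cycle shown above, assuming 8-hour days and the 30-hour effective testing week used in Chapter 7:

```python
# (cycles, hours per cycle) for each test type, grouped by milestone,
# taken from Table 9.3.
plan = {
    "Alpha": [(12, 0.5), (12, 2), (2, 80), (1, 80), (1, 40), (12, 16)],
    "Beta":  [(6, 0.5), (6, 2), (1, 80), (2, 100), (1, 80), (6, 4),
              (1, 40), (2, 20), (1, 80), (1, 20), (6, 16), (1, 16),
              (3, 20), (1, 40)],
    "Final": [(2, 0.5), (2, 2), (1, 80), (1, 20), (1, 16)],
    "Project management": [(1, 40), (1, 20)],
}

totals = {phase: sum(c * h for c, h in tasks) for phase, tasks in plan.items()}
project_hours = sum(totals.values())

print(totals)                    # Alpha 502, Beta 791, Final 121, project management 60
print(project_hours)             # 1474
print(round(project_hours / 8))  # 184 project days
print(round(project_hours / 30)) # 49 person-weeks at 30 testing hours per week
```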
Bibliography

Kaner, Cem, Jack Falk, and Hung Q. Nguyen. Testing Computer Software, 2nd ed. New York: John Wiley & Sons, Inc., 1999.
Kaner, Cem, James Bach, and Bret Pettichord. Lessons Learned in Software Testing. New York: John Wiley & Sons, Inc., 2001.
LogiGear Corporation. QA Training Handbook: Lead Software Test Project with Confidence. Foster City, CA: LogiGear Corporation, 2003.
———. QA Training Handbook: Testing Computer Software. Foster City, CA: LogiGear Corporation, 2003.
———. QA Training Handbook: Creating Excellent Test Project Documentation. Foster City, CA: LogiGear Corporation, 2003.
PART THREE

Testing Practice
CHAPTER 10

User Interface Tests
Why Read This Chapter?

To effectively test the user interface (UI) design and implementation of a Web application, we need to understand both the UI designer's perspective (the goals of the design) and the developer's perspective (the technology implementation of the UI). With such information, we can develop effective test cases that target the areas within an application's design and implementation that are most likely to contain errors.

TOPICS COVERED IN THIS CHAPTER

◆ Introduction
◆ User Interface Design Testing
◆ User Interface Implementation Testing
◆ Usability and Accessibility Testing
◆ Testing Considerations
◆ Bibliography and Additional Resources
Introduction

This chapter explores the two primary classes of UI testing issues: (1) the design of UI components and (2) the implementation of UI components. Web technologies that are used to deliver UI components or controls (graphic objects that enable users to interact with applications) are also discussed, as are considerations for the effective testing of both UI design and implementation.

User interface testing normally refers to a type of integration testing in which we test the interaction between units. User interface testing is often done in conjunction with other tests, as opposed to independently. As testers, we sometimes explicitly conduct UI and usability testing (see the "Usability and Accessibility Testing" section for more information on usability testing), but more often, we consider UI issues while running other types of testing, such as functionality testing, exploratory testing, and task-oriented functional testing (TOFT).
NOTE The discussions in this chapter focus on the testing of Web browser-based applications that run on a desktop, workstation, or laptop computer. For information on testing mobile Web applications, refer to Chapter 6, "Mobile Web Application Platform," and Chapter 20, "Testing Mobile Web Applications."
User Interface Design Testing

User interface design testing evaluates how well a design "takes care of" its users, by offering clear direction, delivering feedback, and maintaining consistency of language and approach. Subjective impressions of ease of use and look and feel are carefully considered in UI design testing. Issues pertaining to navigation, natural flow, usability, commands, and accessibility are also assessed in UI design testing.

During UI design testing, you should pay particular attention to the suitability of all aspects of the design. Look for areas of the design that lead users into error states or that do not clearly indicate what is expected of them. Consistent aesthetics, feedback, and interactivity directly affect an application's usability, and should therefore be carefully examined. Users must be able to rely on the cues they receive from an application to make effective navigation decisions and understand how best to work with an application. When cues are unclear, communication between users and applications can break down.

It is essential to understand the purpose of the software under test (SUT) before beginning UI testing. The two main questions to answer are:

1. Who is the application's target user?
2. What design approach has been employed?
Profiling the Target User Gaining an understanding of a Web application’s target user is central to evaluating the design of its interface. Without knowing the user’s characteristics and needs, you cannot accurately assess how effective the UI design is. User interface design testing involves the profiling of two target-user types: (1) server-side users and, more important, (2) client-side users. Users on the client-side generally interact with Web applications through a Web browser. More than likely they do not have as much technical and architectural knowledge as users on the server-side of the same system. Additionally, the application features that are available to client-side users often differ from the features that are available to server-side users (who are often system administrators). Therefore, client-side UI testing and server-side UI testing should be evaluated by different standards. When creating a user profile, consider the following four categories of criteria for both client-side and server-side users: computer experience, Web experience, domain knowledge, and application-specific experience.
Computer Experience
How long have the intended users been using a computer? Do they use a computer professionally or only casually at home? What activities are they typically involved with? What assumptions does the SUT make about user skill level, and how well do the expected users' knowledge and skills match those assumptions?
For client-side users, technical experience may be quite limited, though the typical user may have extensive experience with a specific type of application, such as instant messaging, spreadsheet, word processor, desktop presentation program, drawing program, or instructional development software. In contrast, system administrators and information services (IS) personnel who install and set up applications on the server side probably possess significant technical experience, including in-depth knowledge of system configuration and script-level programming. They may also have extensive troubleshooting experience, but limited experience with typical end-user application software.
Web Experience
How long have the users been using the Web system? Web systems occasionally require client-side users to configure browser settings; therefore, some experience with Web browsers will be helpful. Are users familiar with Internet jargon and concepts, such as Java, ActiveX, HyperText Markup Language (HTML), eXtensible Markup Language (XML), proxy servers, and so on? Will users require knowledge of related helper applications such as Acrobat Reader, File Transfer Protocol (FTP), and streaming audio/video clients? How much Web knowledge is expected of server-side users? Do they need to modify Practical Extraction and Reporting Language (Perl) or Common Gateway Interface (CGI) scripts?
Domain Knowledge
Are users familiar with the subject matter associated with the application? For example, if the program involves building formulas into spreadsheets, it is certainly targeted at client-side users with math skills and some level of computing expertise. It would be inappropriate to test such a program without the input of a tester who has experience working with spreadsheet formulas.
Consider as another example a music notation–editing application. Determining whether the program is designed for experienced music composers who understand the particulars of musical notation, or for novice musicians who may have little to no experience with music notation, is critical to evaluating the effectiveness of the design. Novice users want elementary tutorials, whereas expert users want efficient utilities. Is the user of an e-commerce system a retailer who has considerable experience with credit-card–processing practices? Is the primary intended user of an online real estate system a realtor who understands real estate listing services, or is the user a first-time homebuyer?
Application-Specific Experience
Will users be familiar with the purpose and capabilities of the program because of past experience? Is this the first release of the product, or is there an existing base of users in the marketplace who are familiar with the product? Are there other popular products in the marketplace that have a similar design approach and functionality? (See the "Design Approach" section later in this chapter for information.)
Table 10.1 Evaluating Target-User Experience
Experience grades: None = 0, Low = 1, Medium = 2, High = 3

ATTRIBUTE                    MINIMUM EXPERIENCE
Computer experience
Web experience
Domain knowledge
Application experience
Keep in mind that Web applications are a relatively different class of application compared to a traditional software application or a mobile Web application. It is possible that you may be testing a Web application that is the first of its kind to reach the marketplace. Consequently, target users may have substantial domain knowledge but no application-specific experience.
With answers to these questions, you should be able to identify the target users for whom an application is designed. There may be several different target users. With a clear understanding of the application's target users, you can effectively evaluate an application's interface design and uncover potential UI errors. Table 10.1 offers a means of grading the four attributes of target-user experience. User interface design should be judged, in part, on how closely the experience and skills of the target users match the characteristics of the SUT.

TESTING THE SAMPLE APPLICATION
Consider the target user of the sample application, which is designed to support the efforts of software development teams. When we designed the sample application, we assumed that the application's target user would have, at a minimum, intermediate computing skills, at least beginning-level Web experience, and intermediate experience in the application's subject matter (bug tracking). We also assumed that the target user would have at least beginning experience with applications of this type. Beyond these minimum experience levels, we knew that it was also possible that the target user might possess high experience levels in any or all of the categories. Table 10.2 shows how the sample application's target user can be rated.
Table 10.2 Evaluating Sample Application Target User
Experience grades: None = 0, Low = 1, Medium = 2, High = 3

ATTRIBUTE                    MINIMUM EXPERIENCE
Computer experience          2–3
Web experience               2–3
Domain knowledge             1–3
Application experience       0
Once we have a target-user profile for the application under test, we will be able to determine if the design approach is appropriate and intuitive for its intended users. We will also be able to identify characteristics of the application that make it overly difficult or simple. An overly simplistic design can result in as great a loss of productivity as an overly complex design. Consider the bug-report screen in the sample application. It includes numerous data-entry fields. Conceivably, the design could have divided the functionality of the bug-report screen into multiple screens. Although such a design might serve novice users, it would unduly waste the time of more experienced users—the application's target.
Considering the Design
The second step in preparing for UI design testing is to study the design employed by the application. Different application types and target users require different designs. For example, in a program that includes three branching options, a novice computer user might be better served by delivering the three options over the course of five interface screens, via a wizard. An information services (IS) professional, in contrast, might prefer receiving all options on a single screen, so that he or she could access them more quickly.
TOPICS TO CONSIDER WHEN EVALUATING DESIGN
■■ Design approach (discussed in the following section)
■■ User interaction (data input)
■■ Data presentation (data output)
Design Approach
Design metaphors are cognitive bridges that can help users understand the logic of UI flow by relating it to experiences that users may have had in the real world or in other places. One example of an effective design metaphor is a Web directory site that utilizes a design reminiscent of a library card catalog. Another is a scheduling application that mirrors the layout of a desktop calendar and address book. Microsoft Word uses a document-based metaphor for its word-processing program—a metaphor that is common to many types of applications.

EXAMPLES OF TWO DIFFERENT DESIGN METAPHORS
■■ Figure 10.1 depicts an application that utilizes a document-based metaphor. It includes a workspace where data can be entered and manipulated in a way that is similar to writing on a piece of paper.
■■ Figure 10.2 exemplifies a device-based metaphor. This virtual calculator includes UI controls that are designed to receive user input and perform functions.
Figure 10.1 Document-based metaphor.
Figure 10.2 Device-based metaphor.
TWO DIFFERENT APPROACHES TO CONVEY IDENTICAL INFORMATION AND COMMANDS
■■ Figure 10.3 conveys navigation options to users via radio buttons at the top of the interface screen.
■■ Figure 10.4 conveys the same options via an ActiveX or Java applet pull-down menu.
Neither design approach is more correct than the other; they are simply different. Regardless of the design approach employed, it is usually not the role of testers to judge which design is best. However, that does not mean we should overlook design errors, especially if we work for an organization that genuinely cares about subjective issues such as usability. Our job is to point out as many design deficiencies as possible, as early in testing as possible. Certainly, it is our job to point out inconsistency in the implementation of the design; for example, if the approach uses a pull-down menu rather than radio buttons, then a pull-down menu should be used consistently in all views.
Figure 10.3 Navigation options via radio buttons.
Figure 10.4 Navigation options via pull-down menu.
Think about these common issues:
■■ Keep in mind that the UI tags, controls, and objects supported by HTML are primitive compared with those available through the graphical user interface (GUI) on Microsoft Windows or Macintosh operating systems. If the designer intends to emulate the Windows UI metaphor, look for design deficiencies.
■■ If you have trouble figuring out the UI, chances are it's a UI error, and your end users would have the same experience.
■■ The UI was inadvertently designed for the designers or developers rather than for the end users.
■■ The important features are misunderstood or are hard to find.
■■ Users are forced to think in terms of the design metaphor from the designer's perspective, although the metaphor itself is difficult to relate to real-life experience.
■■ Different terms were used to describe the same functionality.
Ask yourself these questions:
■■ Is the design of the application under test appropriate for the target audience?
■■ Is the UI intuitive (you don't have to think too much to figure out how to use the product) for the target audience?
■■ Is the design consistently applied throughout the application?
■■ Does the interface keep the user in control, rather than reacting to unexpected UI events?
■■ Does the interface offer pleasing visual design (look and feel) and cues for operating the application?
■■ Is the interface simple to use and understand?
■■ Is help available from every screen?
■■ Will usability tests be performed on the application under test? If yes, will you be responsible for coordinating or conducting the test? This is a time-consuming process, hence it has to be very well planned.
■■ Will accessibility tests be performed on the application under test? (See the "Usability and Accessibility Testing" section for more information.)
User Interaction (Data Input)
Users can perform various types of data manipulation through keyboard and mouse events. Data manipulation methods are made available through onscreen UI controls and other technologies, such as cut-and-paste and drag-and-drop.

User Interface Controls
User interface controls are graphic objects that enable users to interact with applications. They allow users to initiate activities, request data display, and specify data values. Controls, commonly coded into HTML pages as form elements, include radio buttons, check boxes, command buttons, scroll bars, pull-down menus, text fields, and more.
Figure 10.5 includes a standard HTML text box that allows limited text input from users, and a scrolling text box that allows users to enter multiple lines of text. Clicking the Submit button beneath these boxes submits the entered data to a Web server. The Reset buttons return the text boxes to their default state. Figure 10.5 also includes radio buttons. Radio buttons are mutually exclusive; that is, only one radio button in a set can be selected at a time. Check boxes, on the other hand, allow multiple options in a set to be selected simultaneously.
Figure 10.6 includes a pull-down menu that allows users to select one of multiple predefined selections. Clicking the Submit button submits the user's selection to the Web server. The Reset button resets the menu to its default state. The push buttons (Go Home and Search) initiate actions (e.g., CGI scripts, search queries, submitting data to a database, hyperlinks, etc.). Figure 10.6 also includes examples of images (commonly referred to as graphics or icons) that can serve as hyperlinks or simulated push buttons.
Figures 10.7 and 10.8 illustrate the implementation of several standard HTML UI controls on a Web page. Figure 10.7 shows the objects as they are presented to users: a graphic link, mouse-over link titles (ALT text), and a text link. Figure 10.8 shows the HTML code that generates these objects.
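To make these controls concrete, the following is a minimal, hypothetical HTML sketch of the kinds of form elements described above. The field names and the /cgi-bin/feedback action are invented for illustration; they are not taken from the sample application.

    <FORM METHOD="POST" ACTION="/cgi-bin/feedback">
      <!-- Single-line text box: accepts limited text input -->
      Name: <INPUT TYPE="text" NAME="username" SIZE="30">
      <!-- Scrolling text box: accepts multiple lines of text -->
      Comments: <TEXTAREA NAME="comments" ROWS="4" COLS="40"></TEXTAREA>
      <!-- Radio buttons: only one option in the set can be selected -->
      <INPUT TYPE="radio" NAME="priority" VALUE="high"> High
      <INPUT TYPE="radio" NAME="priority" VALUE="low" CHECKED> Low
      <!-- Check boxes: several options may be selected at once -->
      <INPUT TYPE="checkbox" NAME="notify" VALUE="email"> Notify by e-mail
      <INPUT TYPE="checkbox" NAME="notify" VALUE="fax"> Notify by fax
      <!-- Pull-down menu: one of several predefined selections -->
      <SELECT NAME="browser">
        <OPTION>Internet Explorer</OPTION>
        <OPTION>Netscape Navigator</OPTION>
      </SELECT>
      <!-- Submit sends the data to the Web server; Reset restores defaults -->
      <INPUT TYPE="submit" VALUE="Submit">
      <INPUT TYPE="reset" VALUE="Reset">
    </FORM>

Once such markup is in place, each control becomes a point of UI implementation testing: default states, input limits, and the behavior of the Submit and Reset buttons can all be exercised.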
Figure 10.5 Form-based HTML UI controls, including a standard HTML text box and a scrolling text box.
Figure 10.6 Form-based HTML UI controls, including a pull-down menu.
Figure 10.7 Graphic links, mouse-over text, and text links.
Figure 10.8 HTML code for graphic links, mouse-over text, and text links. (The figure labels the image map, text links, and mouse-over text within the markup.)
Standard HTML controls, such as tables and hyperlinks, can be combined with images to simulate conventional GUI elements such as those found in Windows and Macintosh applications (navigation bars, command buttons, dialog boxes, etc.). The left side of Figure 10.9 (taken from the sample application) shows an HTML frame that has been combined with images and links to simulate a conventional navigation bar.
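A rough sketch of this technique follows; the file names, image paths, and frame sizes are assumptions made up for illustration rather than the sample application's actual markup. A frameset reserves a narrow left-hand frame, and the page loaded into that frame stacks image links in a table so that they read as a navigation bar.

    <!-- main.html: frameset reserving a narrow frame for the navigation bar -->
    <FRAMESET COLS="150,*">
      <FRAME SRC="navbar.html" NAME="nav" SCROLLING="no">
      <FRAME SRC="content.html" NAME="content">
    </FRAMESET>

    <!-- navbar.html: image links stacked in a table to simulate a navigation bar -->
    <TABLE BORDER="0" CELLSPACING="0">
      <TR><TD><A HREF="report.html" TARGET="content">
        <IMG SRC="images/new_report.gif" ALT="New Report" BORDER="0"></A></TD></TR>
      <TR><TD><A HREF="search.html" TARGET="content">
        <IMG SRC="images/search.gif" ALT="Search" BORDER="0"></A></TD></TR>
      <TR><TD><A HREF="setup.html" TARGET="content">
        <IMG SRC="images/setup.gif" ALT="Setup" BORDER="0"></A></TD></TR>
    </TABLE>

Because the result only looks like a native navigation bar, its behavior (frame targeting, image loading, keyboard access) still needs to be tested explicitly.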
Dynamic User Interface Controls

The HTML multimedia tags enable the use of dynamic UI objects, such as Java applets, ActiveX controls, and scripts (including JavaScript and VBScript).
Scripts. Scripts are programming instructions that can be executed by browsers when HTML pages load or when they are called based on certain events. Some scripts are a form of object-oriented programming, meaning that program instructions identify and send instructions to individual elements of Web pages (buttons, graphics, HTML forms, etc.), rather than to pages as a whole. Scripts do not need to be compiled and can be inserted directly into HTML pages. Scripts are embedded into HTML code with <SCRIPT> tags. Scripts can be executed on either the client side or the server side. Client-side scripts are often used to dynamically set values for UI controls, modify Web page content, validate data, and handle errors.
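As a minimal, hypothetical illustration (the function name, form fields, and CGI path are invented for this sketch), a client-side script embedded with the <SCRIPT> tag might validate data before a form is submitted:

    <SCRIPT LANGUAGE="JavaScript">
    <!--
    // Client-side validation: runs in the browser before the form is submitted
    function validateBugReport(form) {
      if (form.summary.value == "") {
        alert("Please enter a summary before submitting the report.");
        return false;   // cancel the submission
      }
      return true;      // allow the submission to proceed
    }
    // -->
    </SCRIPT>

    <FORM ACTION="/cgi-bin/submit_bug" METHOD="POST"
          onSubmit="return validateBugReport(this)">
      Summary: <INPUT TYPE="text" NAME="summary" SIZE="40">
      <INPUT TYPE="submit" VALUE="Submit">
    </FORM>

From a testing perspective, validation like this should also be enforced on the server side, because client-side scripts can be disabled or bypassed by users.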
Figure 10.9 Tables, forms, and frames simulating Windows-based UI controls. (The figure shows a frame and image links simulating a navigation bar, and a table and forms simulating a dialog box.)
There are a number of scripting languages supported by popular browsers; some browsers support particular scripting languages and exclude others. JavaScript, produced by Netscape, is one of the more popular scripting languages. Other popular scripting languages include Microsoft's version of JavaScript (JScript) and Visual Basic Script (VBScript).
Java. Java is a computing language developed by Sun Microsystems that allows applications to run over the Internet (though Java objects are not limited to running over the Internet). Java is a compiled language, which means that it must be run through a compiler to be translated into a language that computer processors can use. Unlike other compiled languages, Java produces a single compiled version of itself, called Java bytecode. Bytecode is a series of tokens and data that are normally interpreted at runtime. By compiling to this intermediate language, rather than to binaries that are specific to a given type of computer, a single Java program can be run on several different computer platforms for which there is a Java Virtual Machine (JVM).
Once a Java program has been compiled into bytecode, it is placed on a Web server. Web servers deliver bytecode to Web browsers, which interpret and run the code. Java programs designed to run inside browsers are called applets. When a user navigates to a Web site that contains a Java applet, the applet automatically downloads to the user's computer. Browsers require Java bytecode interpreters to run applets; Java-enabled browsers, such as Netscape Navigator and Internet Explorer, have Java bytecode interpreters built into them.
Precautions are taken to ensure that Java programs do not introduce malicious code to users' computers. Java applets must go through a verification process when they are first downloaded to users' machines to ensure that their bytecode can be run safely. After verification, bytecode is run within a restricted area of RAM on users' computers. In addition, unlike a Java application (which uses a JVM on the target operating system) and unlike ActiveX, Java applets cannot make application or system function calls. This restriction was designed to ensure better security protection for client users.
ActiveX. ActiveX is a Windows custom control that runs within ActiveX-enabled browsers (such as Internet Explorer), rather than off servers. Similar to Java applets, ActiveX controls support the execution of event-based objects within a browser. One major benefit of ActiveX controls is that they are components, which can be easily combined with other components to create new, feature-rich applications. Another benefit is
that once users download an ActiveX control, they do not have to download it again in the future; ActiveX controls remain on users' systems, which can speed up load time for frequently visited Web pages. Disadvantages of ActiveX include its dependence on the Windows platform and the fact that some components are large enough to consume excessive system memory. Furthermore, ActiveX controls, because they reside on client computers and generally require an installation and registration process, are considered by some to be intrusive.
Figure 10.10 shows a calendar system ActiveX control. Figure 10.11 shows the HTML code that generated the page in Figure 10.10. An HTML