49. Professional Communications
Articles in this section: Advanced Publishing Technologies and Services; Corporate and Organizational Communication; Data Presentation; Document and Information Design; Electronic Document Production; Engineering Notebooks; Information Search and Retrieval; Intellectual Property in Engineering; International Communication; Management of Documentation Projects; Oral Presentations; Professional Journal Articles; Telecommunication Methods; Video Production
Wiley Encyclopedia of Electrical and Electronics Engineering
Advanced Publishing Technologies and Services
Standard Article. Pamela N. Novak, Gary R. Danielson, Alan E. Turner, Battelle Pacific Northwest National Laboratory
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W5601. Article online posting date: December 27, 1999.
Abstract. The sections in this article are: Desktop Publishing; The Future of Print Publishing; Online Publishing; The Future of Online Publishing; Conclusion.
ADVANCED PUBLISHING TECHNOLOGIES AND SERVICES The printing press was invented by Johann Gutenberg in the mid-fifteenth century. This device launched the publishing industry, which has played an indispensable role in every aspect of human affairs. Over the centuries, print publishing continued to evolve as improvements were made in equipment, materials, and distribution of the product. However, for more than 500 years, the basic paradigm remained unchanged: the publication process was a linear one from author to publisher to reader. The advent of desktop computers set the stage for the next revolution in publishing, one whose full effects have yet to be felt: electronic publishing. This phenomenon is actually two revolutions that occurred in rapid succession: desktop publishing and online (Internet) publishing. Just as political revolutions often give rise to a period of chaos—a decentralization of control—that lasts until a new government is established, each major advance in electronic publishing has created similar chaos. In the traditional publishing model, for example, the publisher's reputation aids the reader in judging the accuracy and credibility of the publication. A book published by a major university press has more immediate credibility than the same book published by an obscure or vanity press. Likewise, the library has long provided assistance to the reader by organizing and cataloging the vast and diverse contents of its collection. Electronic publishing gives authors more control over the creation and distribution of their work, but a burden is then placed on the reader to find content that is of authenticated value. Over time, this burden becomes intolerable, and standards are developed to assist the reader in validating the credibility of published information and to manage the sheer volume of this information.
Desktop Publishing Desktop publishing is the use of desktop computer technology to design "camera-ready" documents (newspapers, brochures, catalogs, and so on) that integrate text and graphical elements. The desktop publishing revolution began in the mid-1980s, when computers and printers became small, affordable, and powerful enough to be able to perform many functions, including typesetting, page layout, graphic design, photo retouching, and printing on high-quality paper. The key elements that came together to enable desktop publishing (see Fig. 1) were
• Personal computers using direct manipulation interfaces and bitmapped graphics displays
• Support in personal computers for typography
• Affordable laser printer technology that enabled high-quality output
• Page layout programs that exploited these devices to bring traditional pre-press activities into a WYSIWYG ("what you see is what you get") interactive environment.
Desktop Computers. The first desktop computer to achieve popularity for desktop publishing was the Macintosh, a product of Apple Computer. Its easy-to-learn, easy-to-use commands and highly graphical screens were key to its appeal. IBM's personal computer (PC) was the dominant rival product in business computing. Many users still preferred
Figure 1. Today's desktop publishing was shaped through a collaboration of personal computer, laser printer, typography, and page layout program technologies.
the “Mac,” however, because the PC’s disk operating system (DOS) was text-based and difficult to learn; its command screens resembled those used in mainframe computing. Oddly enough, the Mac’s superior usability caused it to be dismissed as a “toy” by many computer industry professionals. IBM and other companies continued to manufacture PCs, which gained a strong foothold in businesses and government. Graphic artists, however, were steadfast Mac users, and as a result many of the more sophisticated graphics software packages were developed for the Mac rather than the PC. In the early 1990s, the Microsoft Corporation released software for the PC that radically changed PC computing. Known as Windows, this tool gave PC users the same advantages enjoyed by Mac users: graphical screens, pull-down menus instead of typewritten commands, the ability to have multiple applications open on the desktop screen at one time, and the possibility to transfer information among these applications. Windows quickly became the standard interface for PCs, Windows-compatible software proliferated rapidly, and desktop publishing grew even faster. Similar changes occurred in other important desktop platforms (e.g., UNIX and X-Windows). Thus, in less than ten years, a technology originally dismissed by many as not good enough for professional work has become the standard tool of designers and publishers worldwide. Desktop Color Publishing. One aspect of true professional-quality desktop publishing output remains elusive. While desktop printers are now available with sufficient dots per inch (dpi) to provide clear, crisp type in a wide variety of fonts, desktop color printing is not yet equal to that achieved by offset printing. On the surface, the solution appears simple enough: obtain new output devices and new software that handles color. The technological difficulty lies in getting consistent, predictable color throughout the design and production process. The strategies to address this problem are known collectively as color management.
Color Management. Fundamentally, WYSIWYG color is an unattainable ideal. Some colors you see on a red, green, blue (RGB) additive color monitor cannot be achieved in cyan, magenta, yellow, black (CMYK) subtractive color printing. Also, there are colors you can print that you cannot see onscreen. In addition, every color input and output device has its own characteristic limitations in gamut, contrast, and distortions. The objective of color management is to provide the best predictable approximation of the original scene or design colors given the limitations of screen or ink. Color management technology has four primary components: standard formats to describe device profiles, support for color management within the operating system of computers, the ability to create profiles for specific devices, and application software awareness of color management. The device profile and its support are the key to color management. A profile describes the color response characteristics of a device. Device manufacturers provide a “standard” profile for a “typical” sample; for example, a particular monitor model. Software and hardware products are also available to create custom profiles tuned to a particular device in use. A color-transformation engine (built into the operating system) uses the CIELAB color model to represent the behavior of real-world acquisition and imaging devices as described by their profiles. (CIE stands for Commission Internationale de l’Eclairages, an international color standards group. LAB refers to the three aspects of color as seen by the human eye: L is relative lightness; A is relative redness–greenness, and B refers to relative yellowness–blueness.) This transformation engine uses CIE to correct the distortions of, and translate between, for example, the RGB color space of a particular monitor and the CMYK space of a particular color printer. Color transformation engines will continue to improve as will the quality of color management support within the major operating systems. Color Output Devices. There are several families of technology for color output devices, including color ink jet, thermal wax transfer, color laser, and dye sublimation (see Fig. 2). These technology families are listed in rough order of increasing cost and output quality. The important distinguishing factor is the manner in which the coloring agent is mixed and delivered to the print medium to form one of the millions of possible color mixtures. Ink-jet printers spray microscopic ink drops from a head onto the print medium. Thermal wax printers melt a wax to a liquid form that is applied to the print medium. Color lasers produce a static electrical charge that is used to attract toners in each component color, which are then fused to the medium. Dye sublimation printers heat dyes to a gas, which is mixed while traveling from the print head to the print medium surface. Hi-Fi Color. Even with the use of color management technology, there remain significant limitations in four-color (CMYK) process printing. Hi-Fi color systems (for example, the six-color Hexachrome process) produce a significantly larger range of colors and offer enhanced control of hue, saturation, and blending. This trend has not yet been widely adopted, although support for enhanced color processes is already appearing in desktop publishing software.
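To make the color transformation described under Color Management above concrete, the following minimal sketch converts a monitor RGB value to CIELAB coordinates. It assumes an sRGB monitor and the D65 reference white as stand-ins for a device profile; a real color management engine would instead read these constants from the ICC profiles of the actual devices in use.

```python
# Minimal sketch: device RGB -> CIE XYZ -> CIELAB, assuming an sRGB monitor
# profile and the D65 white point. Real systems read these values from ICC profiles.

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB components to CIELAB (L*, a*, b*)."""
    def linearize(c):
        c /= 255.0                       # undo the sRGB gamma curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # Linear RGB -> CIE XYZ using the sRGB primaries (D65 white point).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # XYZ -> L*a*b*, normalized to the D65 reference white.
    xn, yn, zn = 0.95047, 1.00000, 1.08883

    def f(t):
        return t ** (1.0 / 3.0) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    lightness = 116 * fy - 16       # L*: relative lightness
    a_star = 500 * (fx - fy)        # a*: redness-greenness
    b_star = 200 * (fy - fz)        # b*: yellowness-blueness
    return lightness, a_star, b_star

if __name__ == "__main__":
    # A saturated monitor red; many CMYK printers cannot reproduce it exactly.
    print(srgb_to_lab(255, 0, 0))
```

The same device-independent representation, applied in reverse with a printer profile, lets an engine find the closest reproducible ink mixture and report which screen colors fall outside the printer's gamut.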
The Future of Print Publishing Many pundits have predicted the demise of traditional print media in the face of online publishing. However, most experts believe that the “paperless society” will remain only a concept. Just as radio and television have not killed the magazine and newspaper industry, the Internet will not kill the print publishing industry. Paper, often underrated as a communication medium, will not be eliminated by the growth of electronic media. It remains inexpensive, extremely portable, and capable of carrying very high-resolution images. Desktop publishing progress should be expected to continue to lower the barriers to paper publication as it has done for over a decade. As it becomes easier to publish on paper—especially in color—more and more people will do so who in the past could not because of the high cost and high level of expertise required.
Figure 2. Four main printer technologies are used to produce color output: ink jet, thermal wax, color laser, and dye sublimation. Each technology mixes and delivers its coloring agent to the print medium in a different way, and this is the key factor that distinguishes them.
One major limitation of electronic publishing, however, is its reliance on proprietary software and equipment that are always in the process of becoming obsolete. Also, as the cost of paper and shipping continues to rise, the commercial publishing houses, the federal government, and other large businesses that
generate a large amount of documentation need to economize by streamlining the publishing process itself. These organizations are turning to industry standards for the solution. SGML: An Attempt to Standardize Markup. In 1986, the International Organization for Standardization (ISO) published Standard 8879, Standard Generalized Markup Language (SGML). A language for the electronic markup of documents, SGML enables document reuse and interchange. SGML code is based on ASCII text with a few simple markup conventions. It can be read and understood both by computers and by humans. SGML is a powerful tool for publishers because it distinguishes between the information content and structure of a document—both of which are concerned with meaning—and its format—which concerns how the information is displayed. The distinction can perhaps be clarified by a familiar example. In most text-processing programs, the author or compositor can mark, or tag, one or more characters so that they print on the page (or display on the screen) in boldface type. What is the significance of the boldface font? Is the author intending to emphasize the information? Does the information constitute a label or a heading? The human reader may deduce this from context, but the computer cannot. The markup command is specific; that is, it tells the text-processing software how to display that particular piece of information. However, the same code could be used many different times to mean different things in the same document. The "generalized" aspect of SGML means that markup is used consistently throughout a document or document set. The tags that are used to mark up parts of a document have a consistent meaning that is defined in a separate file, called the document type definition (DTD). One DTD can apply to many documents. This is, in fact, where SGML is most useful: handling a large documentation set that has a consistent structure. To produce an SGML document requires yet a third piece, the Document Style Semantics and Specification Language (DSSSL), which "translates" the tags into a prescribed format. Using different DSSSLs, a publisher can output the same document in a variety of formats, such as hardcover, paperback, large print, and Braille. Although SGML is used by both government and commercial entities, the upfront investment is substantial. Creating a DTD, for example, requires a thorough document analysis process that is not economical or practical unless either (1) publishing is the primary business of the company or (2) the documentation involved is both vast and critical to operations (e.g., aircraft maintenance manuals). There is also a need to train authors and editors in the proper tagging of documents. So far no commercial off-the-shelf WYSIWYG tool has emerged to flatten (or, better yet, eliminate) the learning curve. Using SGML effectively requires an organizational culture that appreciates rules and consistency, a culture that has grown rarer, thanks in large measure to the desktop computer, which has largely decentralized the publishing process in many companies. As a result, the use of SGML is limited, despite its many advantages.
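The separation of structure from format that SGML enforces can be sketched briefly. The fragment below is only illustrative: it uses a hypothetical document fragment and Python's standard XML parser rather than a real SGML toolchain, with two simple dictionaries standing in for the output specifications (DSSSLs) that map the same structural tags to different presentations.

```python
# Illustrative only: the same structurally tagged text rendered two ways.
# The tag names are hypothetical; a real SGML system would validate them
# against a DTD and format them with a DSSSL.
import xml.etree.ElementTree as ET

DOCUMENT = """<manual>
  <title>Fuel System</title>
  <warning>Relieve fuel pressure before disconnecting any line.</warning>
</manual>"""

PRINT_STYLE = {"title": "== {} ==", "warning": "WARNING: {}"}
SCREEN_STYLE = {"title": "[Heading] {}", "warning": "[Alert] {}"}

def render(doc_text, style):
    """Apply one output specification to every element of the document."""
    root = ET.fromstring(doc_text)
    return "\n".join(style[child.tag].format(child.text.strip()) for child in root)

if __name__ == "__main__":
    print(render(DOCUMENT, PRINT_STYLE))   # format for the printed page
    print(render(DOCUMENT, SCREEN_STYLE))  # format for on-screen display
```

Because the tags record what each piece of text is rather than how it should look, redesigning the output means editing the style table, not the document set.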
Online Publishing The first wave of the electronic publishing revolution, desktop publishing, has opened up many opportunities for authors and publishers, but it still adheres to one aspect of the Gutenberg paradigm—the product is still a static, paper publication. Printed publications have several inherent limitations:
• The information in printed publications becomes out of date immediately.
• It is cumbersome for the reader to follow cross-references within and among publications. It is even harder to search through one or more print documents for a particular fact or concept.
• Printed publications occupy a lot of shelf space.
These limitations have become less tolerable as the number of publications and the need for the information they contain continue to increase astronomically, to the point where terms such as “information overload,” “information anxiety,” and “info-glut” have become common parlance since the late 1980s.
Online (or Internet) publishing, the second wave of the electronic publishing revolution, promises significant advantages over traditional paper publishing:
• Quick access to information
• Interactive capabilities
• Ready integration of multimedia
• Fast, economical distribution
• The ability to search and index text
• Easy, real-time updating of materials
• Greater consumer control over the information experience.
Online publishing can appear in many forms, from an interactive compact disk with read-only memory (CD-ROM) or digital video disk (DVD) title to a document library on the World Wide Web. These delivery forms can be readily combined to achieve a balance between volume and timely content. The Web, in particular, has launched viable multimedia publishing. New electronic presentation formats have facilitated the inclusion of sound, video, and animation in a publication. The associated production issues and challenges of this medium are different from those of print publishing, but no less pressing. In fact, the info-glut problem has been compounded by the rapid growth in Internet publishing, particularly via the World Wide Web. The Web Revolution. In the early 1990s, publishing gained a vital new outlet in the form of the World Wide Web. A subset of the Internet, the Web offered a major advance over the primarily text-based means of electronic interchange available until then. The Web, accessible at the desktop computer via a special viewer or “browser,” is able to display inline graphics and offer media as well as text; it also makes use of hypertext—a means of linking among parts of a document or to other documents. This advance was made possible by the development of the hypertext markup language (HTML) by Tim Berners-Lee in 1991. HTML caught on very quickly because it is easy to learn and inexpensive to use in publishing. The proliferation of web sites has truly been explosive. In October 1998 it was estimated that nearly half a million such sites exist, and growth shows no sign of slowing. Web sites can be updated frequently, and there are no incremental distribution costs. In many cases, companies have ceased to produce paper versions of certain types of documents (newsletters and policy manuals, for example) and have achieved measurable cost savings. Office workers have become accustomed to using their desktop workstations as a primary information resource, and as a result corporate intranets (internal networks) are flourishing. URLs. Documents contained on the Web are published at specific “addresses” called uniform resource locators (URLs). A company or individual sets up a web server, purchases a domain name, and establishes a homepage or website. A typical URL is http://www.mycompany.com. Its components can be identified as follows: http://. This tells the browser that the hypertext transfer protocol is being used. www. The name of the web server. In actuality, any name can be used for the server, but most web publishers choose www because it has become so common that it is intuitive for those who do not know the URL and try to guess at it. mycompany.com. The domain name purchased for the server. The first piece, mycompany, is usually the name or initials of the organization. The second piece, .com, indicates that the organization is a commercial entity. Other common domains are .org (denoting nonprofit or other non-commercial entities) and .gov (denoting government organizations). New domains are being established, because the demand for domain names shows no sign of decreasing and, like telephone numbers, the supply will eventually run out!
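As a small illustration, the Python standard library can split a URL of this form into the components just described (the address and path below are hypothetical):

```python
# Decompose a hypothetical URL into the components discussed above.
from urllib.parse import urlparse

parts = urlparse("http://www.mycompany.com/products/index.html")

print(parts.scheme)    # 'http'                - the transfer protocol
print(parts.hostname)  # 'www.mycompany.com'   - server name plus domain name
print(parts.path)      # '/products/index.html' - location of the document on the server

# The last label of the host name is the top-level domain (.com, .org, .gov).
print(parts.hostname.rsplit(".", 1)[-1])  # 'com'
```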
HTML. HTML, the hypertext markup language, is by far the most important content type on the Web. Based on SGML, HTML is a fairly simple system of marking up American standard code for information interchange (ASCII) text with tags that specify either presentation information (bold, centered, font change) or structural information (title, heading, emphasis). HTML has evolved rapidly to include features for fairly precise and elaborate presentation (much like a page description language); structural features have evolved less rapidly. An essential feature of HTML is that it supports hypertext or, more generically, hyperlinks. This feature allows parts of one page to reference other pages, and the viewer may follow the link with a simple click of the mouse. Soon few people will remember that Internet information was once essentially unreachable without a painful and protracted exploration with a number of isolated tools. Another important feature of HTML is that it may serve as a container to hold other formats of information, such as inline graphics, movies, or sound. Although the hyperlink allows for reaching other kinds of media, it is the inline embedding that makes a diverse multimedia information source feasible. As HTML continues to evolve, mechanisms are added that allow dynamic behaviors to occur. These include active elements such as scripting or Java applets, which are essentially small computer programs embedded in the HTML code. While HTML itself is nonproprietary, a seemingly infinite selection of HTML authoring tools is available at prices ranging from free to several hundred dollars. Some of these products produce HTML files that can only be edited later using that particular tool. The disadvantage of this approach should be obvious. More Content Types. As noted previously, an HTML file may link to or display inline other types of files and other media. Some prevalent examples are graphics, audio, video, animation, and virtual scenes. Most of these formats and media types are proprietary and require specific "plug-ins" or "helpers" to be used. One important example is the portable document format (.pdf), a product of Adobe Systems. In its early versions, .pdf was an online page description format. Its primary use was to capture page images of formatted documents; these images would display on the screen and print on any printer as "exact replicas" of the original source. A .pdf file is created directly from the source information or from a PostScript (a page description language that Adobe created) rendering of the original source. This format is used extensively for documents for which it is desirable to preserve the original format (newsletters, forms, and brochures, for example). Recent versions of .pdf allow for the inclusion of hyperlinks, searching, editing, and multimedia. Leaving Internet publishing for the moment, .pdf is becoming an important format in the desktop publishing world—many service bureaus request .pdf documents for the highest-quality reproduction, and many conferences require submissions to be in .pdf format. Much as PostScript has become the printing language of the world, .pdf may be becoming the electronic portable document format of choice. There is a wide variety of raster image formats used on the Web, the most important being the graphics interchange format (GIF) and the Joint Photographic Experts Group (JPEG) format, which are both understood by all browsers. The tagged image file format (TIFF) is also frequently used.
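A short sketch can make the difference between hyperlinks and inline embedding concrete. The page content below is hypothetical, and Python's built-in HTML parser stands in for a browser: the href of an anchor is a link the reader may follow, while the src of an img pulls another file, such as a GIF image, directly into the page.

```python
# Collect hyperlinks (followed on demand) and inline media (embedded in the page).
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hyperlinks = []     # targets of <a href="...">
        self.inline_media = []   # sources of <img src="...">

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.hyperlinks.append(attrs["href"])
        elif tag == "img" and "src" in attrs:
            self.inline_media.append(attrs["src"])

page = '<p>See the <a href="specs.html">specifications</a>.</p><img src="logo.gif">'
parser = LinkExtractor()
parser.feed(page)
print(parser.hyperlinks)    # ['specs.html']
print(parser.inline_media)  # ['logo.gif']
```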
There are also a variety of video formats prevalent on the Web, including proprietary formats such as Quicktime and RealVideo, as well as industry standard formats such as Moving Pictures Experts Group (MPEG) and Audio Video Interleave (AVI). Video on the web is a highly technical subject in its own right, requiring the publisher to have detailed knowledge of the target audience, the availability and popularity of the required “viewers” in the audience, the likely network connectivity of the audience, and the nature of content and compression effects on quality. Enriching HTML: HTML 4.0 and XML. As noted earlier, while the flexibility of HTML is a great asset, its lack of rigor eliminates one key advantage of SGML; namely, the separation of format and structure markup. At first, this was not seen to be a problem when the Web was still largely experimental. Now that Web publishing is key to nearly every government agency and corporation, however, the realization that websites are costly to maintain has begun to generate interest in Web management tools. Particularly for corporations in competitive industries, it is important to be able to update not only content but appearance. Web technology advances quickly and thus also what is considered “cutting-edge” website design. Because HTML does not rigorously enforce the use of structure codes rather than format codes
(e.g., there is a code for emphasis, but many Web developers use the code for bold instead), it is a considerable expense to change the design of a set of HTML files. Published in 1998, HTML version 4.0 reintroduces some of the distinction between format and structure markup. Certain codes are no longer allowable; new codes are introduced. Once the commercial browsers adopt the HTML 4.0 standards, websites that use the codes as intended will reach a greater range of audience and create content of higher value. In conjunction with HTML 4.0, a capability called cascading stylesheets (CSS) has been developed by the World Wide Web Consortium (W3C). CSS is an output specification, essentially a DSSSL for HTML documents. It is a separate file that “translates” the HTML structure codes into the desired display format. The same CSS can be applied to all the files on a website—for that matter, to all the files on a web server. This capability will make it easier for organizations to deploy a consistent style across their websites and will greatly decrease the cost of updating that style. The CSS, by further “abstracting” the presentation, increases the utility of HTML to operate over a large range of consumer devices, such as televisions and personal digital assistants (PDAs). An even greater stride toward SGML’s original intent is promised with XML, or Extensible Markup Language. XML is a subset of SGML. Its goal is “to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. XML has been designed for ease of implementation and for interoperability with both SGML and HTML” (from the XML 1.0 specification published in February 1998 by W3C, http://www.w3.org/TR/REC-xml). It is hoped that XML will greatly improve online searching by enabling the tagging of content for meaningful indexing. If this indeed proves to be the case, XML may help solve the information overload problem that has grown much worse in recent years.
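A brief sketch suggests why richer tagging improves searching. The record structure and tag names below are hypothetical, and Python's standard XML parser stands in for an XML-aware index: because the markup says which text is an author and which is a title, a query can be restricted to the author field instead of matching the word wherever it appears.

```python
# Structured search over hypothetical XML-tagged records.
import xml.etree.ElementTree as ET

catalog = ET.fromstring("""<catalog>
  <article><title>Searching the Web</title><author>Gutenberg</author></article>
  <article><title>The Gutenberg Press</title><author>Caxton</author></article>
</catalog>""")

# Only records whose <author> element matches; a full-text search for the
# word "Gutenberg" would also return the second article as a false hit.
titles = [a.findtext("title")
          for a in catalog.findall("article")
          if a.findtext("author") == "Gutenberg"]
print(titles)   # ['Searching the Web']
```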
The Future of Online Publishing The mere act of publishing does not necessarily ensure that information reaches its intended audience. The medium chosen for publication has always had a profound impact on the audience’s ability to access the information. Distribution channels, shelf placement, and advertising have also had a major impact on audience exposure to published material. Online publishing removes some impediments but introduces others. Accessibility. When preparing a publication for access over the Internet, a publisher makes many decisions that affect the audience discovery of and access to the work. The fundamental limitation is that access to the Internet and the Web is by no means universal. Even those who are connected are limited by workstation type, operating system version, browser type, the availability of plug-in applications needed to run certain file formats, network speed, and audio or motion video capability. If the document requires the use of any technology other than the lowest common denominator version of HTML, the Web audience is limited to some degree. In general, the more technology incorporated into the Internet publication, the greater the likelihood that some readers will not have access. Use of the lowest common denominator may not take advantage of the power of the medium and many authors decide to incorporate advanced features regardless. With proper knowledge and care, however, Internet documents can be authored in such a way as to present themselves to the reader at the level of the technology being used to access them. For instance, an HTML document may use JavaScript to enhance document navigation only if the browser supports JavaScript. A static graphic image may be replaced by animation if the browser plug-in or capability is detected. Other steps can be taken to increase the audience. Readers can be told what technology is required for proper viewing and how to access that technology. It is common to see a website with links to browser download sites if a specific browser or browser plug-in is required by the content. If the reader is given clear instructions on what technology is required, where to locate it, and how to install it and is sufficiently motivated by the document content to go through the trouble of installing new software, he or she will do what is necessary. The author of an online publication also needs to exercise caution about adopting new technology too soon. In many cases, what is leading-edge technology today will become “baseline” tomorrow. This has happened
repeatedly as new versions of HTML have been published and adopted. Only years after HTML first began to support tables, however, could an author count on the majority of his or her audience having table support in their browsers. Likewise, not all leading-edge technologies make it into the mainstream. The incorporation of new technologies too early can cause high audience frustration and should be avoided. At the same time, it appears that the Internet audience has an unusually high tolerance for frustration. They have been conditioned to slow access speeds, browsers that do not behave consistently, and unintelligible error messages. Readers tend to tolerate these problems as long as the information they ultimately obtain is of value to them. Thus, just as with other media—print, television, music—the quality of the content is more important than the technology used to deliver it. It is becoming increasingly possible for those with physical disabilities related to sight or mobility to access information using a web browser. Tools that read text aloud or respond to voice commands are becoming more sophisticated and available. The success of these tools, however, depends largely on the care with which websites are created. If HTML tags are used properly and a few other conventions are observed, Web information can be easier for the disabled to access than is often the case for print. Much remains to be done in this area. Search Tools. One particularly frustrating experience for the online reader is the process of trying to find the sought-after information. There is not much use in publishing high-quality content if it is impossible to find. However, there is no Internet equivalent of the Library of Congress! Search engines provide a valuable service in helping the Internet audience to find what they are looking for, but the search engines themselves can use help. It is estimated that even the most comprehensive of the commercial search engines (Alta Vista, Lycos, and InfoSeek, for example) have been able to index only about 40% of existing websites. The author’s knowledge of how these search tools work can greatly improve the likelihood that his or her documents will be found. Some search engines index their catalogs based on content, others look for HTML meta tags (codes that contain information about the document that does not display in the browser window), still others use a combination of the two and a human editor to provide categorization. As discussed earlier, XML offers the possibility for vastly richer tagging of content and, as a result, vastly improved searchability of XML-tagged documents. Of course, it is up to the author to exploit this and other tools to assist the reader. Convergence. Another emerging trend is the convergence of technologies that will enrich the audience’s experience. These include the Internet, broadcast, and telephone systems. With convergence comes an environment in which the audience will have access to a book, the movie based on the book, biographies of the author, and conference calling with the author, all with no clear lines of technological distinction and all using the same hardware devices (see Fig. 3). Perhaps the most interesting recent example of convergence is the development of electronic paper and electronic books. While these devices are still in their early stages, the basic concept is that the physical paper or book can display content that it receives electronically. 
This technology allows for updated content while taking advantage of an output medium that is familiar and easy to use. Electronic “Records”. Information stability and reliability on the Internet are growing problems. A cultural shift on the part of the publisher and consumers may be required before progress is made in this area. If the source of record for a publication is online, it has to be available essentially forever at a known location (perhaps a website of record). Just as we expect that a library will have a book on the shelf today and in the future, we want nothing less from the Internet. Likewise, we would not expect a specific edition of a book to change content, and we expect the same from a website of record. A numbering standard or convention, perhaps similar to the ISBN (International Standard Book Numbering) assigned to a book, may provide a solution. Government, commercial publishers, and academic institutions are the logical candidates for leadership in this area. Credentials. The credibility of a document on the Internet is always suspect. In print publishing, readers rely on the credibility of the publishing house. Publishing companies offer standards of peer review, editing, and reputation that readers come to trust. On the Internet it is difficult to know where the document came from (the URL is often not enough), who authored the document, and whether the named author actually
Figure 3. The continuing development of the Internet and new emerging technologies will soon unite it with broadcast and telephone systems.
created the content. Public Key Infrastructure (PKI), originally developed to enable secure online business transactions, may help address these problems. With PKI, authors and companies would be able to sign Internet content digitally. By this means, readers can be assured that what they are reading has not been altered and that the named authors and institutions are the true source. When the reader views a document, embedded in the document will be information necessary for the browser to check the digital signatures with independent and credible certificate authorities who hold credentials in trust.
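A minimal sketch of the signing idea follows, using the third-party Python cryptography package; the key pair, document bytes, and padding parameters are illustrative assumptions rather than a prescribed PKI profile. In a full deployment the public key would be bound to the author or publisher through a certificate issued by a certificate authority.

```python
# Illustrative only: sign document content with a private key and verify it
# with the matching public key. Any alteration of the content breaks the check.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

content = b"<html><body>Document of record, edition 1</body></html>"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

signature = private_key.sign(content, pss, hashes.SHA256())

try:
    public_key.verify(signature, content, pss, hashes.SHA256())
    print("Signature valid: content unchanged and issued by the key holder.")
except InvalidSignature:
    print("Signature check failed.")

try:
    public_key.verify(signature, content + b"!", pss, hashes.SHA256())
except InvalidSignature:
    print("Altered copy detected.")
```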
Conclusion The electronic publishing revolution has moved quickly through its period of chaos. Readers are demanding order in the form of rules, conventions, and standards. These are now being provided in a number of areas and are quickly being developed in others. The future is bright for inexpensive and timely access to a vast array of information.
BIBLIOGRAPHY
1. D. Connolly and J. Bosak, Extensible Markup Language (XML), 1998. Online: http://www.w3.org/XML/.
2. C. F. Goldfarb, The Standard Generalized Markup Language (ISO 8879), Geneva: International Organization for Standardization, 1986.
3. D. Kosiur, Understanding Electronic Commerce, Redmond, WA: Microsoft Press, 1997, pp. 75–78.
4. H. W. Lie, Web Style Sheets, 1998. Online: http://www.w3.org/Style/.
5. D. Raggett and A. Le Hors, W3C User Interface Domain, 1998. Online: http://www.w3.org/Markup/.
6. D. Raggett, A. Le Hors, and I. Jacobs, HTML 4.0 Specification, 1998. Online: http://www.w3.org/TR/REC-html40/.
7. S. St. Laurent, XML: A Primer, Foster City, CA: MIS Press, 1998, pp. 178–205.
8. E. van Herwijnen, Practical SGML, Geneva: Kluwer Academic, 1990.
PAMELA N. NOVAK
GARY R. DANIELSON
ALAN E. TURNER
Battelle Pacific Northwest National Laboratory
Wiley Encyclopedia of Electrical and Electronics Engineering
Corporate and Organizational Communication
Standard Article. Michael B. Goodman, Fairleigh Dickinson University, Madison, NJ
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W5602. Article online posting date: December 27, 1999.
Abstract. The sections in this article are: Overview of Corporate and Organizational Communication; Key Corporate Communication Functions Listed Alphabetically; Corporate Communication—Meeting the Challenge of the Future. Keywords: advertising; change communications; communication process; communication with the media; corporate citizenship; corporate culture; corporate policy; correspondence; crisis management; ethics; executive communication issues; meeting the press; mission statement; negotiation; public relations; TQM
CORPORATE AND ORGANIZATIONAL COMMUNICATION OVERVIEW OF CORPORATE AND ORGANIZATIONAL COMMUNICATION Corporate communication (1,2) is the term used to describe a variety of management functions related to an organization's internal and external communications. Depending on the organization, corporate communications include such traditional disciplines as public relations, investor relations, employee relations, community relations, media relations, labor relations, government relations, technical communications, training and employee development, marketing communications, and management communications. Many organizations also include philanthropic activity, crisis and emergency communications, and advertising as part of corporate communication functions. Technologies, such as the Internet, underscore the global character of communication. In practice, corporate communication is a strategic tool for the corporation to gain a competitive advantage. Corporations use it to lead, motivate, persuade, and inform employees, stockholders, and the public as well. Understanding corporate communication provides the vision and outcome expectations a company requires for strategic planning in an information-driven economy. Corporate communication is more art than science. Its intellectual foundations began with Greek and Roman rhetoric. Its body of knowledge is interdisciplinary, drawing on the methods and findings of anthropology, communications, language and linguistics, management and marketing, sociology, and psychology. Strategic Importance of Corporate Communication Communication has become vital to business growth since our economy has firmly based itself on information, rather than manufacturing. Customers, employees, investors, suppliers, and the general public now expect a high level of communica-
tion and candor from the companies that operate in their community. Even in an environment that extols the virtues of decentralization to meet customer’s needs quickly, the value of a central management structure for communication makes sense for many organizations, particularly those with global operations. A central group responsible for communications develops, projects, and maintains the corporation’s image and culture. A communication group within an organization sets policy and guidelines to meet the strategic goal of developing and perpetuating a corporate image and culture, to project consistent messages, and to communicate with its various publics on a routine basis and in emergency and crisis situations. Much has been written about corporate culture (3) and its influence on the behavior of employees. Often a company is described in cultural terms, its shared values and beliefs. These beliefs are the center of advertising campaigns and motivational programs for employees. A strong corporate culture promotes a recognizable and positive perception of the company among its suppliers, vendors, and customers. The equity a company culture amasses is then part of its value as a brand-name product, stimulating customer loyalty. A culture cannot be imposed on people, but it can be nurtured. How an organization communicates with its employees, stockholders, its external audiences, the press, and foreign customers brings its values to life. Some signs—the company buildings, vehicles, employee appearance—are easy to observe. Others are harder to recognize at a glance: attitudes such as an innovative spirit, a commitment to community, and an understanding of the coexistence of fair play and competition. These forces shape the corporation, and they are manifested in the organization’s communications. Corporations that do not value communication highly are doomed to wither. From the perspective of an anthropologist, Corporate Communication encodes the corporate culture and promotes the following: • a strong corporate culture • a coherent corporate identity • a reasonable corporate philosophy and clear corporate vision • a genuine sense of corporate citizenship • an appropriate and professional relationship with the press • a quick and responsible way of communicating in a crisis or emergency • an understanding of communication tools and technologies • a sophisticated approach to global communications Corporate Communication Philosophy Corporate visions, mission statements, and company philosophies are the products of executives who recognize the strategic value of a clear statement of what the corporation stands for, its goals, and its practices. Clear understanding and articulation of the company mission and vision is the cornerstone for building an image in the minds of employees and the public.
Organizations committed to communicating with stockholders, employees, and the community have a definite communication philosophy. Companies may call it their communication vision, policy, or their mission statement. The philosophy is articulated through statements of commitment to employees, customers, and other stakeholders. The written statement of corporate commitment to goals and values is often the external manifestation of the communication philosophy. It is not necessary for a written statement to exist to have a philosophy, but if the written statement does not represent corporate behavior and values, its hollowness is apparent to everyone. For companies operating globally, a strong corporate communication philosophy offers the foundation for a code of ethics that applies throughout the world. Most corporations have a code of ethics with a section on international business ethics. The Westinghouse Code of Ethics & Conduct (4) offers a fine model: Employees conducting business internationally are required to comply with all applicable U.S. and foreign laws and regulations. Compliance with such laws, as well as company standards (including this Ethics Code), is required even if they seem inconsistent with local practice in foreign countries, or would place the company at a competitive disadvantage. The penalties for noncompliance can be severe, both for the company and for involved individual employees. KEYS TO COMPLIANCE: Don’t Make or Offer Unlawful Payments or Bribes—The Foreign Corrupt Practices Act bars the payment or offering of anything of value to officials or politicians of foreign governments, and others, to obtain or retain business . . . Abide by Import/Export Controls . . . Adhere to U.S. Economic Boycott Laws . . . Refer International Trade Law Questions to the Law Department . . .
Individual Skills and Talents Organizations use personality profiles to find the right person for the job. A corporate communicator should have the following qualifications:
• written and speaking communication expertise
• understanding the communication process
• interpersonal skills, face-to-face and telephone
• media savvy
• an understanding of customer, stakeholder, and community needs
• curiosity
• active listening skills
• an understanding of advocacy
In addition, corporate communications demand an ability to solve problems in groups, to understand media and communication technology, to work ethically, and to feel comfortable in a global business environment. The elements of communi-
Figure 1. The writing process is dynamic, incorporating opportunities to evaluate and improve the text and graphics. (The steps shown in the figure: determine goals and objectives; analyze audience needs and expectations; brainstorm ideas and graphics; gather and analyze information; arrange information for maximum effect; put important ideas up front; write a draft of the whole document; allow time to cool off; revise, rearrange, and edit; prepare the final draft, including graphics; submit.)
cation continue to exert substantial influence in all transactions from simple customer questions of frontline sales and retail personnel to the pressure negotiations involved in a multinational merger. Corporate communication has evolved into a complex profession, yet writing remains the central talent to create any communication in a corporate context. No matter what the medium of the final message, the ideas more often than not begin in writing. Understanding the writing process is fundamental to communication and media applications.
The writing process (see Fig. 1) also serves as a model for the communication process and emphasizes three main areas of analysis:
• audience
• context
• content (message)
Corporations routinely target a message for a particular audience, meeting their needs while achieving the company goals.
Successful communication puts human interaction at its center and in a collaborative corporate environment seeks to win both for the organization and for its customers. The type of person who can collaborate is someone who sees an issue from several perspectives and creates a message based on analysis rather than on personal bias. The ability to see a message as a graphic image, or series of images, is also essential. No one can deny the impact of visual media on the way people gather and process information. Curiosity is also a valuable personal attribute for professional communicators. The communicator must first have an interest in what is happening in the company and to its people and customers to be able to communicate that interest to others. Without interest, the writer’s message is bland at best, at worst phony and hollow. Active listening is essential to effective communication and builds a relationship of trust. Communicators understand the need for this fundamental business practice: listen to your customers and employees. Consideration of the ideas of others places value on them and on your relationship with them. Understanding advocacy communications is also essential to corporate communicators. A company spokesperson may be called upon to put aside personal opinion in favor of a company position. Because of this fact of corporate life, the ideal corporate communicator is one who has been with the organization a long time. Negotiation skills are also essential for advocacy communications. Contemporary management experts identify at least six patterns of interaction in negotiations: win/win, win/lose, lose/lose, lose/win, win, or no deal. Even though contemporary business is highly competitive, it is also extremely cooperative and interdependent. Win/win thinking seeks benefit in interactions and selects agreements or solutions to problems that are mutually beneficial and satisfying. The contemporary business environment is one in which today’s competitor is tomorrow’s partner. Win/win builds an environment of trust since the solution is not your way or my way, but a better way. Integrity is extremely valuable for any organization, and any corporate spokesperson should instill trust in the audience. Without trust, the message is unlikely to have the desired impact or much positive impact at all. Integrity and trust is built over time through attention to detail, consistency in message, follow-through on promises. It is reinforced in face-to-face contact with customers and employees through body language and eye contact, as well as words. Integrity and trust are built with every act and every message of an organization. Groups and Presentations Corporations and organizations function through groups and as collections of groups. Note the language: management team, quality circle, quality action team, management committee, board of directors, product management group, and crisis committee. Whether your organization emphasizes old-style hierarchical leadership techniques, what has been called Theory X, or more contemporary consensus management styles, Theory Y or Z, the ability to work effectively in and with groups is an essential element in a broader definition of corporate commu-
nication. The reengineering and quality programs are built upon a foundation of shared commitment to corporate goals. Most communication at work occurs in small groups. People give numerous presentations related to actions and projects. Companies and industries have their own particular presentation style. In engineering and high tech firms, the presentations are straightforward and factual. Engineers prefer an analytical presentation of the facts. Visuals are overhead projections or slides in formal situations. Management presentations, on the other hand, are brief and direct with the use of slides and video. More effort, however, is spent on the form of the presentation than would be for an audience of technical experts. Managers expect a presentation of the options, alternatives, and solutions, rather than an analysis alone, because they need to see the results of an analysis. Decision makers, then, expect a polished presentation, not a slick one. Increasingly, meetings are on interactive video networks, computer networks, and by e-mail. Meetings occur through computers, changing familiar patterns of eye contact, facial expression, and body language in face-to-face communication. Like the telephone, computer-mediated communication calls for a new etiquette of human interaction. As these customs and rules develop, the corporate communicators will be in the vanguard of the change. Selecting Media Corporate communications require that professionals determine the best media for both the message and the audience. High technology digital multimedia to low technology posters in the company lobby or a new company logo to a ‘‘dressdown’’ day for employees are possible media for corporate messages. Selecting the right medium for the message plays a central role in the success of the communication (see Table 1). The corporate communication professional selects media, keeping in mind the message, the desired effect on the audience, and the corporate environment. Cost is always a factor because time, talent, and money are limited and budgeted. KEY CORPORATE COMMUNICATION FUNCTIONS LISTED ALPHABETICALLY Advertising and Company Image Building Corporate advertising, as distinguished from product advertising, creates a positive image of the corporation. It presents a general image of the company or presents an issue with which the company wants to be associated. It can feature issue advocacy and present views on social concerns, such as the environment, recycling, conservation, and world hunger. It can be found in special sections magazines, on the Sunday morning news analysis shows, on Public TV documentaries, and in sponsorship of art exhibitions, sports, and concerts. According to the Association of National Advertisers annual survey Corporate Advertising Practices, the intended audiences for corporate advertising continue to be customers first, then the trade, employees, Wall Street, and Washington. Whether marketing-driven or image-driven, organizations increasingly participate in Public Television as part of their corporate advertising and corporate identity actions. Producers and Public TV station owners consider their viewer as a
Table 1. Selecting Appropriate Media
• TV network video. Application: company annual meetings; motivational messages; news conferences; announcements; training. Impact: high. Cost: high.
• Radio. Application: company annual meetings; motivational messages; announcements; training. Impact: moderate. Cost: moderate.
• Film. Application: company annual meetings; motivational messages; company history; training. Impact: moderate. Cost: high.
• Print. Application: company annual reports; newsletters; magazines; announcements; policies; reference documents. Impact: low. Cost: low.
• Computer network; e-mail; electronic bulletin boards and "home pages". Application: time-critical messages; proprietary technical information; routine memos and action items; reference material; policies. Impact: moderate. Cost: high.
• Displays; posters; bulletin boards. Application: motivational messages; seasonal announcements; safety and quality messages. Impact: low. Cost: low.
citizen, rather than a market. It is free to the American citizens, unlike cable-based services, and costs roughly $1 per person per year in tax money. And the audience is approximately 100 million per week. The impact of Public TV has created an awareness that the marketplace is not always the answer. The focus of Public TV is on quality programming, which is expensive. And it has done fine programming with limited resources, considering that the yearly outlay of the entire public broadcasting industry (approximately $1.5 billion) is less than Fox paid to get NFL Football from CBS. Also consider that Public TV has enjoyed broad bipartisan support and is also heavily involved in local and national classroom learning and classroom activity. Changes in media and technology and the number of channels and other media sources available to people through the Internet make the participation in a proven quality medium such as Public Television less of a risk. These are some ways corporations participate in Public TV and benefit: • underwriting programming; books tied to programming establish positive public image; indicate participation on letterheads • sponsoring local programming builds equity in the community • funding equipment in schools provides strong community link • hosting receptions to influence law-makers • sponsoring courses; training; teacher training; literacy; and math associates company with the subject In terms of traditional advertising, the answer is not clear whether Public TV is worth it. But if participation is seen as a form of corporate advertising, to build image, then support of Public TV is valuable. Communicating Change—Reengineering, Quality, Corporate Culture Programs Corporations are changing, reinventing, rethinking, transforming, and reengineering themselves. And with change comes chaos, uncertainty, and renewal. For everyone in-
volved, change represents a threat to security or an opportunity to move forward. New forces are at work in changing corporations. New Sophistication in Customers or Audience. The force of the customer is felt everywhere from consumer electronics to the use of new management tools, such as integrated product development (IPD), in traditionally conservative, hierarchical organizations. Customers at all levels demand quality products and are hungry for information about the products they want. They are also looking for stimulation and entertainment, which has profound implications for such fields as software interfaces and the development of the information superhighway. New Media Technologies. The number of communication channels available is increasing: e-mail, fax, voice mail, desktop publishing, personalized magazines and journals, networking software and groupware, and the World Wide Web. Because there are more tools and more choices, consumers need more information than ever before. Moore (5) explains the complexity of the forces at work in taking innovations to market. He describes five categories of people in a technology adoption cycle:
1. Innovators—pursue new technology products; seek them out before marketing begins; place technology at the center of their lives; take pleasure in exploring new technology for its own sake; make up a small, but influential minority.
2. Early adopters—buy new products early in the life cycle, but are not technologists; easily imagine, understand, and appreciate the benefits of new technology; relate potential benefits to their own goals; make their buying decisions on their own intuition rather than on another's recommendation.
3. Early majority—share some of the early adopter's appreciation of technology, but are driven by practicality; wait to see how others do before they buy in because they know how fads work; require well-established references before they invest; make up about one-third of the adoption cycle and are critical to the success of any product.
4. Late majority—share all the concerns of the early majority and more; are not comfortable handling technological products; wait until a standard is established; require lots of support from a large, well-established company.
5. Laggards—want nothing to do with technology for many reasons, personal and economic; buy technological products only when they are buried deep inside another product, so that the technology is invisible to them.
More Widespread Ethical Environment. Because the tools of our technological age have enormous social and economic impact, the ethics of the workplace must be considered. No longer can a corporation make a product and not worry or care about its impact on the community. Companies now function as corporate citizens. New methods of regulation and new laws underscore the responsibility customers expect of providers. Stronger Economic Factors. Competition has been the strongest economic factor for change in corporations. It has forced the quest for quality and efficiency as coequal goals in a company's strategy. It has also forced the rapid growth in globalism. New Strategic Alliances. Ventures, partnerships, reorganizations, mergers, acquisitions, buyouts, reengineering, downsizing, rightsizing—more than buzzwords, these are the codes for a workplace in upheaval. Almost every organization has undergone or is undergoing a profound change in structure or ownership. If managed well and communicated clearly, the new alliances signal a different way of thinking about work in general and about the workplace itself. The change process emphasizes rethinking or reengineering of management practices from hierarchical, authoritarian relationships among managers and employees to a consensus approach. The focus is on teams empowered to identify and solve problems and implement solutions. Communication and a new customer orientation are the cornerstones of the change in both company attitudes and practices, requiring corporations to make massive changes in the way people communicate within the organization and with those outside. Organizations and People Suitable for Technical Innovation. An individual who sees widespread changes in work processes and outcomes as a challenge is best suited for technical innovation and change. This person comes to work smiling, often arriving early and leaving late. No matter how much chaos the organization is in, this person appears to respond well to the situation. Others in the organization respond less well to change and exhibit dysfunctional behavior. There are degrees of dysfunctional behavior related to change. For instance, examples of a low degree of dysfunction are poor communication, reduced trust, blaming, defensiveness, increased conflict with fellow workers, decreased team effectiveness, and inappropriate outbursts at the office. Moderate dysfunction involves lying or deception, chronic lateness or absenteeism, and symptoms such as headaches and stomach pains, apathy, and interpersonal withdrawal. A high degree of dysfunction involves covert undermining of leadership, overt blocking, actively promoting a negative attitude in others, sabotage, substance abuse, physical or psychological breakdown, family abuse, violence, murder, and suicide.
The person who responds well to change exhibits buoyancy, elasticity, resilience—the ability to recover quickly from change. Such people possess a strong, positive sense of self, which provides them with the security and confidence to meet new challenges, even if they do not have all the answers. They are focused on a clear vision of what they wish to accomplish, and they are tenacious in making the vision a reality. In addition, these people are accommodating and flexible in the face of uncertainty and organized in the way they develop an approach for managing ambiguity. They engage the circumstances, rather than defend against change. Such a person practices fairness, integrity, honesty, human dignity—the principles that provide the security to adapt to change. Understanding and managing expectations helps an individual or an organization through the change cycle. Rather than lower expectations, manage them. In doing so, consider that in responding to positive change most people go through these phases: 1. uninformed optimism or certainty at the start; 2. informed pessimism or doubt—people may quit publicly, or more destructively they quit privately and continue to work, allowing the negative feelings to generate dysfunctional behavior; 3. hope emerges with a sense of reality; 4. informed optimism results in confidence; 5. satisfaction closes the cycle of change. The good news is that the cycle is predictable and can be used to manage expectations by helping people prepare for the rough periods. The bad news is that most people feel they are exceptions and do not follow the cycle from beginning to completion. People neglect to consider that change carries an equal opportunity for failure. The Language of Change. Often people react to new situations without fully realizing their true feelings, and they cannot articulate them. The metaphors they use reveal and shape their understanding of events. The metaphors of change are roughly aligned with four types of organizational change: maintenance, developmental, transitional, and transformational. In maintenance, change is equated with something broken or poorly maintained. Change means that something is wrong and needs to be fixed. The metaphor provokes a fix-and-maintain image represented by agents such as a mechanic, maintenance worker, or repairperson. In developmental change, change builds on the past and leads to better performance over time. In this environment teamwork is the key to build and develop. The agent is often called trainer, coach, mentor, facilitator, or developer. You might hear metaphors borrowed from sports, such as "There is no I in TEAM." Transitional change involves a move from one condition to another. For instance, an operation goes from manual to automated. The image is often one of movement and relocation, and the agents are often called planners, guides, or explorers. In such environments you might need to create a map for unexplored territory. And transformational change implies the transfiguration from one state of being to a fundamentally different one.
An example might be a business or industry that changes from a regulated monopoly to a market-driven competitive business. The image is one of liberation, and the agents are called visionary, creator, and liberator. Understanding and using the language of change benefits everyone involved and helps them perceive change as an opportunity to move forward, rather than as a threat to their well-being.
Communicating with the Media
Creating good media relations requires constant effort and attention and a mature corporate attitude toward the public and the media. The contemporary business environment is awash with media—newspapers, magazines, professional and industrial journals, TV, business radio, multimedia, and the World Wide Web. Corporations spend millions on marketing and advertising so that their message can reach their current and potential customers. If the press sees a corporation's product or service as news, then it writes or broadcasts a story. Many organizations measure media coverage in terms of the equivalent cost of advertising. Coverage is the goal of any media relations plan. Good relations with the press result when the reporter checks with the corporation to validate statements and facts. Such contact offers an opportunity to set the record straight or put the facts into a clearer, more objective context. Rumors and inaccuracies are corrected. Meeting the Press: Some Guidelines. The following actions are the guidelines for meeting the press most often cited by scholars and practitioners. Be Prepared. In an information society such as ours, having accurate data and timely statistics is expected. Not only are you giving your valuable time to discuss issues and events with the press, but their time is valuable also. So do your homework and prepare wisely for a press interview. Make Your Points. Have three main points you wish to get across. Just as you would in an executive summary of a report or in a marketing communication, identify clearly the main ideas that make up the message you want communicated. Be Concise—But Avoid Yes, No. Show awareness of the space and time limitations of the media by presenting positions clearly and concisely. Although brevity is a virtue, the press also looks for interest. Yes and no answers make the story difficult to write and uninteresting for TV or radio. Get Comfortable. Remember that movements and eye contact communicate nonverbally. When meeting face-to-face with the media, prepare for the discussion by making sure that you are not interrupted. A conference room set aside for outside guests is a good idea. Tell the Truth. Building credibility with the media begins with their perception of you as a source of accurate and truthful information. Integrity is a valuable attribute. People react positively to people they perceive to be genuine. Being yourself is linked with telling the truth and is part of building corporate integrity. Use the Printed Word. Prepare for press encounters with a printed statement or press release. The document helps reporters get facts straight—figures, statistics, the spelling and titles of people mentioned.
Remember that the reporter's job is to report the facts, and getting accurate information often involves complex and detailed data. Keep Your Composure. The media must attract readers and viewers to sell advertising. Such pressure translates into the search for unusual or controversial angles. Journalists call this the hook, the means to capture the audience's attention. Offbeat, even offensive questions are a common tactic to elicit an emotional reaction that makes a good headline. So be cool under pressure. Think of the Reader or Viewer. Remember the importance of the audience in any communication. Consider how remarks would appear on the front page of The New York Times, The Wall Street Journal, the local TV news, or on the national news. Say You Don't Know. When asked a question that stumps you or requires information or data you do not have at hand, say you don't know. Follow up immediately with plans to get the information and an offer to contact the reporter later, preferably before the deadline. Hypothetical Questions, Third-Hand Information. Reporters may ask questions that lead to speculation. Such questions are particularly common when corporate officers are asked to comment on possible mergers, anticipated layoffs, or restructuring. If reporters cannot identify the source of their information, politely decline to discuss rumor and hearsay because of your company press policy. Sensitivity to Deadlines. The daily production of newspapers and TV or radio broadcasts places strenuous demands on reporters to file their stories on time. It is a common courtesy to ask at the beginning of an interview when the reporter's deadline is. Accessibility. Give reporters a contact number, an e-mail address, and a fax machine number to indicate that you are available for follow-up questions as the story is being written, and later as a source for other stories. Forget "Off the Record". If you don't want something to appear in print or to be broadcast, then don't say it. "No Comment". Finally, the press universally interprets the response "no comment" as a ploy to hide something. Say clearly that the company does not discuss proprietary issues or matters that have personal impact on employees. Develop a Media Strategy. A media strategy is important locally and globally. The process follows a four-step problem-solving model:
1. Define the Problem. Write a problem statement, and analyze the situation. The analysis requires gathering, processing, and interpreting information. Listening and observing are fundamental methods. Interpreting information helps confirm the problem statement or restates it in a new light. The analysis should lead to planning.
2. Plan. Articulate goals and objectives, and develop a program of actions to achieve them. Identify the audiences or publics, the goals for each, and the message and media strategies to meet the goals. Budget time and other resources that must be committed to the program. Planning also involves evaluating the performance of the program.
3. Implement Plans and Communicate Messages. The fundamentals of the communication process offer the key to successful implementation. Understanding the corporate goals and objectives, fitting them to the audience's needs and expectations, and being mindful of the context in which the communication occurs all apply here. The goal is to change the thinking and behavior of the audience.
4. Evaluate. Measures of the effectiveness of the program can range from the number of column inches or minutes on the air the effort generated, to the increased awareness of the issues measured in the target audience, to changes in attitudes, opinions, or behaviors, and to evidence of economic, social, or political change. The criteria and evaluation methods must be determined as the program is planned and evolves (a simple numerical illustration follows this list).
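One common evaluation measure mentioned above is the advertising-equivalent value of coverage. The sketch below is illustrative only: the column-inch and airtime rates are hypothetical figures invented for the example, not values taken from this article.

    # Hypothetical advertising-equivalency calculation for evaluating media coverage.
    # All rates are invented for illustration.
    PRINT_RATE_PER_COLUMN_INCH = 150.0   # dollars per column inch of display advertising
    TV_RATE_PER_30_SECONDS = 900.0       # dollars per 30-second local spot

    def print_equivalent_value(column_inches):
        # Equivalent advertising cost of a print story of the given length.
        return column_inches * PRINT_RATE_PER_COLUMN_INCH

    def broadcast_equivalent_value(seconds_on_air):
        # Equivalent advertising cost of broadcast airtime.
        return (seconds_on_air / 30.0) * TV_RATE_PER_30_SECONDS

    coverage_value = print_equivalent_value(12) + broadcast_equivalent_value(90)
    print("Estimated advertising-equivalent value: $%.2f" % coverage_value)

Under these assumed rates, a 12-column-inch story plus 90 seconds of airtime would be valued at $4,500; the point is the form of the calculation, not the specific figures.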
Using Media
Broadcast News. Gone is the era in which national networks dominated TV screens. Major markets have a dozen or more stations. The number goes well above fifty, counting cable. Nevertheless, national network news still plays a major role in bringing the business news to most people. Creating good relationships with the national networks and their local affiliates is fundamental to an effective corporate communication strategy. Corporate use of the national networks is related to building corporate identity and corporate image. Radio. Radio in the age of television has become an "outlaw" medium. It is very personal and focused on niche markets. Certain products and services can be discussed and advertised on the radio with relative ease. Radio programs appeal to smaller and smaller audiences with ever-increasing diversity. Advertising and sponsorship of shows are a bargain compared to television network rates. Radio is also used to inform employees of local events or plant closings. Cable. Like radio, cable television has so many stations that its audiences are smaller and more diverse. Advertising for local businesses makes economic sense in these markets. Business-related programs make up much more of the programming on these stations. Video and Satellite News Releases. Traditional communication with newspapers called for the circulation of a written press release usually sent by mail or fax. The company held a press conference for important or particularly newsworthy information. Many organizations now use video and satellite technology to provide information about the company and its products and services to local news stations, the national networks, and cable companies. In effect, the corporation prepares a video news story that the outlet can run in its entirety or from which it can use clips in developing its own story. The practice has benefits for both the company and the news organization. It allows the company to provide detailed information in a visual format that it controls. Often the company provides more dramatic footage than the station has available or could prepare on a daily deadline schedule or budget—pictures of aircraft in flight, new equipment undergoing tests, or computer simulations of planned buildings within the community. Corporate Video. Corporations use video internally to provide information to staff and employees. Large companies have a television network that broadcasts company news daily to major sites around the world. TV monitors are located in high-traffic areas, by elevators, or near the entrance to the cafeteria. The screen may have a scrolling message.
In addition to this video bulletin board, the monitor may offer short pieces from company officers, plant employees, and community leaders. The use of corporate video also involves employee orientation and training. Corporate video is also used to provide important information to the community, to present product information to customers, and to present financial information in a video annual report. Interactive Video. The combination of computers and video has opened the corporation to new media technologies—voice mail, e-mail, and local area networks. Such systems are in use in hospitals, factories, hotels, and offices. They allow users to select information from a menu using a light pen, mouse, or keypad. The Financial Press. Experts and practitioners agree that good corporate communications improve the company's financial position, keep managers out of trouble with the Securities and Exchange Commission (SEC), and help protect against unfriendly takeover attempts. Companies must communicate their financial expectations and long-term outlook. Investors and analysts need such information in making investment decisions for themselves and their clients. Annual reports, 10-K and 10-Q reports, and quarterly reports are among the documents required by law. These fundamental documents can be used to communicate the corporation's vision for the future while providing detailed financial analysis for stockholders, regulators, and the financial press. The legal policy of materiality, evolved through regulatory changes and decisions, requires that publicly held companies disclose all information with an impact on the organization's profit or financial position. Such disclosures are done through information wire services—Dow Jones and Reuters. Communication with individual investors and with institutional investors is also done through the business and financial press. Hometown Media. Local newspapers, radio, and TV stations cover the local angle of corporations. If an organization has a substantial presence in the community, any change in the size of the national workforce or global strategy makes the front page on Main Street. With this in mind, make sure that the corporation's representatives in the community and those at the home office have had substantial contact with one another before information is released. Such coordination ensures consistency in the information given to reporters and shows careful planning. National Newspapers. Only a handful of newspapers claim reporting news of the nation and the world as their mission. The New York Times, The Washington Post, and The Los Angeles Times consider themselves newspapers of record. They report events of national and international importance and make information available to the world through wire services. The Wall Street Journal could easily be included in the general news category because business is involved in everything, and its coverage of events is certainly global in scope. These few national newspapers have enormous impact on public opinion and attitudes. The stories they run set the agenda for other media, particularly television news programs and cable network programming. Corporations scan the media regularly for stories and information. They create a briefing book for their officers, alerting them to news related to the corporation, its core business strategy, its products, or the industry as a whole. On-line technologies allow individuals within corporations to program software to call up articles in fields in which they are interested.
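The kind of keyword-driven article retrieval just described can be sketched in a few lines. The sketch below is illustrative only: the headlines and interest terms are invented, and a real media-monitoring service would read from a wire feed or database rather than a hard-coded list.

    # Minimal sketch of keyword-based article filtering for a corporate briefing book.
    # Headlines and interest terms are invented for illustration.
    articles = [
        "Regulators propose new recycling rules for electronics makers",
        "Quarterly earnings preview: consumer electronics sector",
        "City council debates stadium financing",
    ]
    interests = {"recycling", "electronics", "earnings"}

    def matches(headline, terms):
        # True if any word of the headline, stripped of punctuation, is a term of interest.
        words = {w.strip(".,:;").lower() for w in headline.split()}
        return bool(words & terms)

    briefing_book = [h for h in articles if matches(h, interests)]
    for headline in briefing_book:
        print(headline)

With these invented inputs, the first two headlines would be selected for the briefing book and the third would not.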
Trade Publications. Trade and industry-specific newspapers and periodicals offer businesses a medium specific to their concerns. These publications are mainly closed-circulation periodicals; that is, the people who receive the publication are involved in the industry. The free-subscription periodicals are regarded as promotional vehicles for the industry or trade, giving advertisers a controlled audience. The paid publications clearly separate editorial content from advertising and are seen as more objective, with more reliable information. Professional Journals. Professional journals have long been the main vehicle for reporting research findings. Their existence and value depend on objective editorial policies. Most accept no advertising to avoid bias. Some, however, accept ads from professional organizations and groups for conferences, books, and other publications. The best way to develop relationships with such publications is through support of professional organizations, which in turn support the publications. Company accountants, lawyers, writers, engineers, and scientists can identify the proper journals. Entertainment Media. Social events related to charity functions are often the only time a corporation appears in the entertainment media. Corporations are mentioned if they sponsor events—the opera, the philharmonic, Shakespeare festivals, the Super Bowl, or the Olympics.
Communication with Stockholders and Investors
Building relationships with the investment community, a central function of corporate communications, demands clear, honest interaction. Not only is it good practice, but laws and regulations require full and fair disclosure of material information to the marketplace. The goal of this flow of information is to give analysts and investors the best information possible so they can fairly assess the value of your company. The information allows informed decisions about the strengths and prospects of companies. Communication builds a company's mutually beneficial relationships with investors, analysts, and stockbrokers. Information about a company that is candid, complete, timely, and honest is the foundation of a strong positive relationship. Companies help investors develop realistic expectations by providing accurate information for analyzing results and making forecasts. The relationship keeps big surprises from occurring. Investors and analysts prefer companies that have predictable performance and provide reliable information. Once communication establishes the company's credibility, investor confidence in management grows. A strong relationship with investors can help in flat or down quarters. Investors accept explanations, are more patient, and may be more inclined to hold the stock. Communication methods and vehicles for investor relations are:
• printed matter—company prospectus, annual reports, quarterly reports, 10-K and 10-Q reports, press releases, fact books, corporate background or overview statements, Securities and Exchange Commission filings, the proxy statement
• oral presentations—annual meetings, briefings, conference calls, telephone contacts, audio tape reports
• electronic means—e-mail, broadcast fax, videotape reports, on-line information services and databases, the Internet
Communication in International Environments
"Act local, think global" has become a familiar business mantra. The simplicity of the phrase can lure the unsuspecting into a simple-minded interpretation. Much has been written on the need to compete in global markets. The reality is that doing business in another country is complex and difficult. For a start, it requires familiarity with the history, the politics, the alliances and treaties, and the art and literature of a country. An effective approach to learning about the transnational environment also includes an understanding of language, technology and the environment, social organization, contexts and face-saving, concepts of authority, body language and non-verbal communication, and concepts of time (6). Language. Doing business successfully demands attention to cultural, social, political, and religious practices, in addition to technical, business, legal, and financial activities. Communication is the key to each. Real communication—not cookbook do's and don'ts. The first step is to make every effort to learn the language. Almost all people notice your effort to learn their language. This is more than just symbolic. Language encodes culture, and making an attempt to understand the words leads to trying to understand the way people think. Learning the language helps to understand the way the people who speak it view their world. In addition to its power to convey information and ideas, language is also the vehicle for communicating values, beliefs, and culture. Technology and the Environment. The way people view technology and their environment is culturally defined and has an impact on international business communication. The way people view man-made work environments differs in the perception of lighting, roominess, air temperature and humidity, and access to electricity, telephones, and computers. People perceive their relationship to the physical environment differently. For some, nature is to be controlled; for others, it is neutral or negative; and for others, it is something to be in harmony with. Even climate, topography, and population density have an impact on the way people perceive themselves, and that has an impact on the way they communicate, their concepts of mobility, and the way they carry on business. Western managers expect a clean and relatively quiet office, one with dependable lights, telephones, copiers, networked computers and e-mail, and temperature control. However, many countries ration essential services, such as electricity. Transportation and housing may not meet Western standards. And the natural environment may be much hotter, colder, more humid, or drier than anticipated. Daylight in northern countries may be limited in winter, and almost endless in summer. Heat and rain may change the daily routine, particularly in the tropics. Be prepared to adapt. Social Organization. Social organization, or the influence of shared actions and institutions on the behavior of the individual, has a strong impact on business communication. Institutions and structures reinforce social values—the consensus of a group of people that a certain behavior has value. Familiarity with the major works of art and literature opens a window
to the social organization of the country. These social structures influence business:
• family relationships
• educational systems and ties to business
• class and economic distinctions
• religious, political, and legal systems
• professional organizations and unions
• gender stereotypes and roles
• emphasis on the group or the individual
• concepts of distance and attachment to the land
• recreational activity
Contexts and Face-Saving. Contexts and face-saving refer to the way one communicates and the situation in which the communication occurs. Cultures may be high-context, like the Japanese, or low-context, like the German. In a high-context culture like the British, details about class and education and even the place of birth are apparent in one's clothes and accent. On the other hand, low-context cultures require almost photographic detail for clear meaning. People all over the world seek to preserve their outward dignity or prestige by face-saving. Cultures, however, differ in the emphasis they place on it. High face-saving cultures have these general characteristics:
• high contexting
• indirect strategy for business communication
• toleration of a high degree of generality, ambiguity, and vagueness
• indirect communication considered polite, civil, honest, considerate
• direct communication considered offensive, uncivilized, inconsiderate
• few words used to disclose personal information
Low face-saving cultures have these general characteristics:
• low contexting
• a direct, even confrontational, strategy for business communication
• very low tolerance for generality, ambiguity, and vagueness
• indirect communication considered impolite, unproductive, dishonest, inconsiderate
• direct communication considered professional, honest, considerate
• written and spoken words used to disclose personal information
Saving face is allied with concepts of guilt and shame. Shame is associated with high-context cultures; guilt with low. Low-context cultures value rules and the law. Breaking the law or a rule implies a transgression—sin and guilt—as a mechanism for control. High-context cultures use shame as the agent for controlling behavior through face, honor, dignity, and obligation.
Concepts of Authority. The concepts of authority, influence, and power, and how power is exercised in the workplace, differ from culture to culture. In Western cultures power is the ability to make and act on decisions, an abstract ideal discussed and debated by philosophers and theorists. In Asian cultures, power and authority are almost the opposite of Western concepts. Power results from social order. Asians accept decision making by consensus and decide to be part of the group rather than the leader. Understanding the concept of power helps shape a business communication strategy. The direct approach to communication, so effective in the West, may prove crude and offensive elsewhere. Body Language, Nonverbal Communication. Body language and nonverbal communication are just as important in international and cross-cultural communications as they are in communications within a homogeneous culture. Important elements in international communication are kinesics (body movements); physical appearance and dress; eye contact; touching; proxemics (the space between people); paralanguage (sounds and gestures used in place of words); colors; numbers; alphabets; symbols (such as a national flag); and smell. Concepts of Time. Concepts of time differ from culture to culture. Physicists, such as Albert Einstein and Stephen Hawking, have demonstrated that time is relative. For purposes of communication across cultures, it also helps to consider time as a social variable. Time is defined culturally and by shared social experience.
Corporate Policy: Vision, Mission Statements, and Corporate Philosophies
The written mission statement defines the corporation, its goals and operating principles, and its values and beliefs. The first of these three parts is straightforward and brief. The presentation of goals and operating principles calls for more detail. The expression of a company's values and beliefs is difficult because people associate values and beliefs with philosophical or religious activities, not commercial ones. These statements cover a company's commitment to the following:
• quality and excellence
• customer satisfaction
• stockholder return on investment
• profits and growth
• employee relations
• competition and competitiveness
• relationships with vendors
• ethical behavior
• community relationships and corporate citizenship
and recently,
• diversity in the workplace
• preservation of the environment and resources
A corporate code of conduct, ethics policy guideline, or handbook of business practice expands the company mission statement. The written code acts as an implementation guide and may include the following:
• policy regarding general business conduct, disclosure, and compliance
• workings of the corporate business ethics committee
• compliance with laws: securities (insider information; financial inquiries); disclosure of company information; political contributions; relationships with government officials (domestic and foreign); commercial bribery (kickbacks, gifts); record keeping; antitrust (Sherman and Clayton Acts); mergers and acquisitions; international operations
• bidding, negotiation, and performance of government contracts
• conflict of interest
• equal opportunity
• working conditions
• the environment
Correspondence and Communication Technologies
Knowledge is power. Electronic media offered organizations the productivity and communication tools to usher in the information age. They were simultaneously a lever to flatten hierarchical organizations and a means toward an empowered and informed workforce. However, "The paradise of shared knowledge and a more egalitarian working environment just isn't happening. Knowledge isn't really shared because management doesn't want to share authority and power (7)." Are these the signs of a failed revolution, or are they more likely the end of a cycle in which the organization and the individual continue the struggle for dominance? Since the 1920s and 1930s and through the Depression, organizations worked toward realizing a human relations model, described by Elton Mayo and expanded by Abraham Maslow. These theorists articulated the twentieth-century conflict between the needs of the individual and the needs of the organization. Then as now, this conflict remains the irreconcilable force of the industrial revolution, the postindustrial revolution, and the information age. Our electronic communication tools highlight the paradox. A single person can influence the course of large organizations, as in the case of Intel's troubled introduction of the Pentium chip in the winter of 1994–95. Such David and Goliath tales of organizational life make headlines. More often than not, though, it is the organization that still wields such power and influence that most contemporary Davids are overwhelmed almost effortlessly. Today David can be downsized, restructured, press-released, or budget-cut into submission. Or David can be worked into submission, his support staff replaced by productivity software, groupware, and Internet access. Logos, Letterhead, Annual Reports. Logos of organizations are familiar by design.
If not recognized instantly, the logo has done a poor job of graphic communication. Even if it is a design of the letters of the company name, the artwork reinforces the visual stimulus. The logo and the company colors build corporate image by giving a nonverbal message that reinforces the company image in the mind of the viewer. Shape, use, color, and placement are centrally controlled to build and maintain a corporate identity. Even in an age of decentralized management, the central control of corporate graphic images is essential in building and keeping a corporate image. The annual report is the primary publication given freely to introduce the company to the outside world. It provides information on the company's progress and accomplishments to the investment community, stockholders, employees, and the general public. An indirect but essential goal of the annual report, and one other way to justify its expensive production, is its role in perpetuating the image and identity of the organization. Copies of the report go to all registered stockholders and also to Wall Street analysts, the business press, students, libraries, vendors, trade associations, and professional groups. The report is often a requirement in new business proposals to clients and the government and is frequently used for employee recruiting. Given all of these uses, every element of the annual report is designed to contribute to the positive image of the company:
• artful covers, excellent photography
• CEO's letter
• summary of accomplishments
• discussion of plans for the coming year
• auditor's statement and balance sheet
• ten-year comparison of financial highlights
• footnotes to satisfy Securities and Exchange Commission regulations
Voice Mail, E-Mail, LAN, The Internet, World Wide Web. E-mail links computers to send messages from one computer to another. The systems have global reach through various networks, such as the Internet and commercial providers. These electronic communication channels save time and distribute costs. Messages can be posted on a general bulletin board to which anyone on the system has access, or sent to a distribution list or to one person on the network. In many organizations e-mail has replaced the use of and the need for informational memos. Using e-mail to replace paper memos and physical distribution of those documents has substantially accelerated communication within organizations. Local area networks (LANs) function similarly to e-mail, but they consist of several computers in a particular location linked to form a network that allows the users to share data and programs. LANs have the outward appearance of a centralized computer, but the system functions more like a bundle of cells.
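The distribution-list mechanism described above can be sketched with standard Internet mail tools. The example below is a minimal sketch: the mail server, addresses, and list membership are hypothetical, and a production system would add authentication, logging, and error handling.

    # Minimal sketch: sending one announcement to an e-mail distribution list.
    # Server name and addresses are invented for illustration.
    import smtplib
    from email.message import EmailMessage

    distribution_list = ["staff-news@example.com", "plant-managers@example.com"]

    msg = EmailMessage()
    msg["From"] = "communications@example.com"
    msg["To"] = ", ".join(distribution_list)
    msg["Subject"] = "Quarterly results announcement"
    msg.set_content("The quarterly results briefing is posted on the internal server.")

    with smtplib.SMTP("mail.example.com") as server:   # corporate mail relay (hypothetical)
        server.send_message(msg)

The same message, addressed once to the list, reaches every member; this is the property that lets e-mail replace the physical distribution of informational memos.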
The Internet and the World Wide Web came into wide commercial use after 1994. Companies have built Web pages for fear of being left out of this technological revolution. The Internet may fulfill the predictions as interfaces improve, the infrastructure gets better, security becomes tighter, bandwidth becomes higher, and full-motion video is added. Skeptics see the open architecture of the Internet and the World Wide Web as both its reason for success and its commercial weakness. Designed by the Defense Advanced Research Projects Agency (DARPA) to withstand the catastrophic damage of a nuclear holocaust, the network draws its strength from its open nature. So the security of proprietary company data cannot be protected with high confidence in this environment. Nevertheless, almost every company has an Internet address, up from fewer than half in 1995. Impact of Electronic Media. As anthropologists know, tools are the artifacts of a culture. Add a new tool to an existing culture, and it changes that culture. Our media technologies now allow us to communicate anytime, anywhere. The impact of a global, 24-hour workday has profound implications for our lives (8) and for how communication technology influences our society. Gates (9) and Negroponte (10) to the contrary, the change is not always positive. New media technologies—the tools of communication—continue to have a profound impact on corporate communication. By their nature innovations are tools which may or may not require us to change our behavior. Technologies that do require us to change Moore (5) calls discontinuous innovations, and these tools are not user-friendly. As such they are often doomed to failure or require several iterations to attract acceptance. The issues that face communications professionals focus on human interaction both with machines and with other people. The use of contemporary communication technologies in an environment of accelerating change, political uncertainty, economic stress, and uncertain corporate direction places new demands on the communication professional. No longer is mere superior talent with the written word sufficient. Understanding the ethical conflict of individual rights and corporate goals is a necessity for survival. With increased emphasis on team action and the proliferation of empowerment programs through TQM and reengineering, the need to work effectively with others, rather than in isolation, is also a fact of corporate life for communication professionals. Information technologies have changed the way we create, archive, access, and distribute information. New technologies have made the access to and use of information more egalitarian and less proprietary. Gatekeepers have been eliminated and new classes have emerged: the information haves and the information have-nots. Some experts, such as Zonis (11), see a global destabilization as a result of these technical advances, a rapid breakdown in the power structure of business, the family, and political organizations. However, the equalizing power of information has flattened the hierarchical nature of organizations. With hierarchies disappearing, egalitarian and collaborative structures are emerging. The workplace, the nature of work, and the fabric of our society have entered a rapid and unrelenting change cycle. Once, the relationship between individuals and the institutions of our culture could be relied upon. A bond existed that engendered trust and loyalty. Contemporary executives, however, find that these are rare commodities indeed.
Crisis Management and Communication
This section discusses stages of a crisis, planning for and managing a crisis, and responding during the crisis and after. The remainder covers crisis preparation through issues management, particularly environmental issues. Stages of a Crisis. Many people who write about crises and management use a medical analogy. In the First World War medics developed triage for rendering aid on the battlefield, sorting casualties into three groups:
• those not seriously wounded, who could be treated and released
• those near death, for whom no amount of effort could make a difference
• those who would most likely recover if something were done immediately
The triage model applies to communications and actions during the crisis itself, when time to contemplate and analyze is all but nonexistent. The triage approach fits the management principle of applying limited resources for their greatest impact. Another model identifies the development of disease through stages: (1) prodromal, (2) acute, (3) chronic, and (4) resolution to a normal state. In practice you can refer to (1) the precrisis stage, (2) the clear signs of a crisis, (3) the persistent reemergence of the crisis, and (4) the resolution. Airlines, utilities, computer operations, and hospitals plan for the unthinkable because they have learned from painful experience that the unthinkable has a nasty habit of happening. Crisis Communication Plans. In the past what happened in a business was literally no one else's business. Corporations cut off questions with a curt "no comment." Such closed-door policies create an information vacuum. With the trend toward sensationalism, many reporters will fill that vacuum themselves, often in ways damaging to the organization. Employees also fill the information vacuum, fueling the rumor mill within an organization. Combine one disgruntled employee and one ruthless reporter, and the result can be a major headache for the company, which becomes the catalyst in a media feeding frenzy with unpredictable outcome. Cooperation with the media and employees is a much more prudent and mature policy for any organization to take in normal times and in times of crisis. Planning for a crisis as a fact of corporate life is the first step in its resolution and a subsequent return to normal operations. No one can predict when an event will occur, only that sometime in the life of an organization a product fails, markets evaporate because of a new invention, the stock price falls, an employee is caught doing something illegal, the CEO retires, the workforce goes on strike, a natural disaster occurs, or a terrorist plants a bomb. It is perfectly normal for executives to avoid thinking about a crisis. Positive thinking is embedded in the way managers are taught to be effective. Problems are opportunities; one man's misfortune is another's fortune, and so it goes. Admitting that a crisis could occur is to entertain the greatest of corporate sins, failure. The tendency to ignore the worst also recognizes that people cannot control events. Being unable to control the forces of nature certainly does not mean weakness on the part of managers. It merely indicates that people must plan to deal with emergencies and their consequences. Weakness comes only when people do not prepare for events. Companies which get into trouble are often the ones which never considered that bad things would happen to them. Emergencies, disasters, bomb threats, criminal charges, executive misconduct: none of these may happen, but a well-run corporation develops plans in case the unthinkable occurs. Even the best-run companies can and do have difficulties.
Gerald Meyers (12) identifies nine types of crises: (1) public perception, (2) sudden market shift, (3) product failure, (4) top management succession, (5) cash flow problems, (6) industrial relations, (7) hostile takeover, (8) adverse international events, and (9) regulation and deregulation. Planning for a crisis implies that the people in the company can recognize a crisis when it occurs. People generally experience the same stages when faced with adversity or catastrophic loss: denial or isolation, anger, bargaining for time, depression and grief, and finally acceptance. An organization is no different because it consists of people. Organizations experience (1) shock, (2) a defensive retreat, (3) acknowledgment, and (4) adaptation and change. Responding to Pressure Groups: "Green" Issues and Crisis Preparation. Interest groups make corporate life difficult for companies through public demonstrations staged to capture media attention, through announced boycotts of products and services, through direct harassment of company executives and employees, or through terrorist acts directed at the corporation. The conflicting power of money and morality is at the heart of understanding the social fabric of contemporary business. Freedom of choice, freedom of religion, and free markets pull and tug at one another over the issues of the environment, sexual behavior and practices, and behavior that corrupts the individual and the community. These issues represent the sharp edge of social change, potentially valuable in a dynamic free society, or a grave danger to the health of the corporation. People on their own and in organized groups have been and continue to be extremely vocal on these social issues. They express their positions with their pocketbooks. Monitoring social change is the best way a company can prepare itself for new markets and for changes in existing customer attitudes. Social change is often not indicated on the traditional balance sheets, but its power is felt as changes influence the corporation and the community it serves. During the "greed" decade of the 1980s, the drive for profit forced many companies to rank very low the impact of their actions on customers, employees, and the greater social good. Driven by corporate counsels and a philosophy that adhered to the letter of the law rather than its spirit, many corporate leaders asked, "Can we do it?" as opposed to, "Should we do it?" The environmental icons and popular culture symbols of disaster, such as Chernobyl and Bhopal, are the result. Companies can have their image, and by extension their brands, tainted through a simple act of indifference. According to Ottman, "In this new marketing age, products are being evaluated not only on performance or price, but on the social responsibility of manufacturers (13)." Consumers now look at the long-term impact of the product on society after it is used. The concept of quality in products now incorporates their environmental impact. Customers' needs, laws and regulations, and the reality that technology can simultaneously create new solutions to clean up one mess and cause another are the forces driving companies to include a "green position" in their marketing, advertising, and corporate communications. To the pressure from consumers, add the force of law and regulation, a global marketplace, and the ability of engineers using the latest technology to design products that are "green." In the future, products will be fitted with a "green port," an electronic memory that a technician can access
quickly to find the materials and components in the product and its service records, and to perform a quality check. All this determines whether the appliance can be reused, repaired and resold, or dismantled and some parts reused. In Europe, the lack of landfill space and increased waste from electronics and household appliances are driving regulations for recycling refrigerators, cars, personal computers, fax machines, TVs, and stereo equipment. In response to these forces, Germany, Japan, Austria, Italy, Switzerland, France, and the Netherlands have begun to articulate guidelines for producer responsibility. Their efforts indicate a new era of environmental accountability. Companies that wish to do business in these countries have to meet an array of regulations and to comply with ISO 9000 standards, or they cannot sell their products and services in the European Union.
Ethics and Corporate Citizenship
Over a quarter century ago in The New York Times, Milton Friedman called corporate giving the equivalent of theft, "spending someone else's money" to solve social problems that are the province of government. He defined a manager's moral mandate: to "make as much money for the stockholders as they can within the limits of the law and ethical custom." A new generation of managers views Friedman's position as out of step with corporate citizenship (1,2). Put simply, corporate citizenship is the acceptance of the corporation's role as a responsible and significant member of the community it is in. Add to this the changes in the nature of stockholders since the 1987 crash from individuals to institutions, such as the enormous pension funds of TIAA and the States of New York and California, and the concept of responsibility meets the profit motive in a partnership that works for Levi's, L.L. Bean, Microsoft, and other successful companies. The message was also reinforced by more than a decade of fiscally conservative government, under which funds for social programs were prime candidates in the slashing of public budgets. So if government was out of the business of solving social problems, who would solve them? The members of the community, which has increasingly come to include organizations and businesses. No longer was it sufficient for a business only to pay taxes and stay out of the affairs of the community. Corporations that defined themselves as good corporate citizens overwhelmingly linked their giving programs to their business goals. Such corporate citizens supported education programs, recycling, and other environmental support programs. Good corporate citizens measure and report their corporate efforts by the following means:
• mentioning the activities in their annual report
• publishing a public interest report
• featuring the activities in the company newsletter
• issuing press releases
• linking their citizenship actions to advertising and marketing themes
Companies with a long-term commitment to social responsibility are rewarded with greater name recognition, more productive employees, lower R&D costs, fewer regulatory hurdles, and stronger synergy among business units (1,2). Acting as good citizens, modern corporations have provided social services such as health care, have funded public facilities,
such as parks, playgrounds, and recreation buildings, or have entered partnerships with the community to maintain the infrastructure of highways and bridges. A corporation with research and development ties often demonstrates its citizenship by supporting employee membership and participation in professional, scientific, and scholarly societies and organizations. Such support includes attendance at conferences and encouragement to take leadership roles in the organizations. The workforce in America is becoming more diverse in ethnicity, race, gender, and age. The need for individuals to work in groups or teams has increased as a result of greater technological complexity in the nature of work itself. Even before the building of the Pyramids of Egypt, large projects demanded group efforts. Technological effort in the 1990s and into the next century implies that individuals from a wide variety of backgrounds work together in groups. The quality process itself depends on groups of professionals and technicians at all levels working together to achieve the common goals of the group. Interpersonal communication skill, which begins with understanding and respect for each person in the group, is the key to successful group performance. In a corporate culture of decision making by consensus, the efficient and effective interaction of members of a group is essential for communication. Prejudice and bigotry have no place in corporate America.
Executive Communication Issues
Forces within organizations shape and influence the behavior of individuals in subtle, yet powerful ways. These forces, like the wind and the tides in natural environments, are often unseen and unnoticed themselves, but their effects are easily observed. These forces combine to create the culture. Corporate culture (1–3) has become a concept that, used appropriately, offers the intellectual tools for an insightful analysis of an organization's beliefs and behavior. Used improperly, it devolves into jargon and faddism. In an anthropologist's terms, all human groups by their nature have a culture, the system of values and beliefs shaped by the experiences of life, historical tradition, social or class position, political events, ethnicity, and religious forces. In this context, a corporation's culture can be described, understood, nurtured, and coaxed in new directions, but rarely created, planned, or managed in the same way a company creates a product or service. Nevertheless, an organization's culture plays a powerful role in its success and in its failure. For this reason, the discussion of a corporation's culture offers a foundation for understanding the group's behavior and suggests ways of perpetuating or changing the culture. Defining a Corporation's Culture. Deal and Kennedy popularized the term corporate culture in 1982 (3). A corporation's culture exhibits three levels:
1. artifacts and patterns of behavior which can be observed, but whose meaning is not readily apparent;
2. values and beliefs which require an even greater level of awareness;
3. basic assumptions about human activity, human nature and human relationships, and assumptions about time, space, and reality.
Level three is often intuitive, invisible, or just below the level of awareness. Examples of artifacts and behaviors abound: corporate logos, the company headquarters, annual reports, company awards dinners, the annual golf outing, and the business attire at the main office. The artifacts and behaviors can be observed. Often these are outward manifestations of what the corporation believes and values, no matter what it says its values and beliefs are. Examples of values and beliefs are articulated in a slogan or an ad campaign, such as Ford's decades-old "Quality is Job 1" or GE's "We bring good things to life." These are simple, yet effective ways to put into words what may be complex and difficult concepts. Both examples present a complex pledge from the company to its customers to create products that improve their lives. Companies which actually write values statements find the task difficult because the written presentation too often sounds like the values statement of almost any company. Clichés and platitudes can make the most honest presentation seem hollow. Basic assumptions, the third level, are even more difficult to articulate because they require analyzing what the company says and observing what it does, then synthesizing to determine conflicting areas. One example of a fatal conflict between the projected basic assumption and what lay beneath the surface was the demise of the investment houses E.F. Hutton and Drexel Burnham in the 1980s. Both companies quickly lost clients' trust when scandals surfaced which undermined the integrity a client expects from an investment bank. Deal and Kennedy described four corporate culture subgroups to identify and understand the various types of corporations. They called them Corporate Tribes: tough guy/macho; work hard/play hard; bet-your-company; process. Table 2 provides descriptive information about the four corporate tribes. The characteristics compared are risk, feedback from the environment, rewards, people, organizational structure, and behavior. Signs of a Culture in Trouble. Can a problem with corporate culture be identified? Weak cultures have no clear values or beliefs. Members often ask for an articulation or written statement of the group's mission. When a mission statement is available, people in the organization routinely ridicule it as a fantasy having little to do with what the company really does. Weak cultures also exhibit many competing beliefs. That may appear to be tolerance, but no agreement on the most important beliefs plants seeds of confusion and undermines motivated employees. Some beliefs may develop into an ingrown and exclusive subculture, and the subcultural values then preempt the company's. Destructive and disruptive heroes are apparent in cultures in trouble. In direct conflict with the organization's stated beliefs and values, an executive's abusive, harassing, or uncivilized behavior may be overlooked because he or she looks great on the bottom line. Other signs include disorganized rituals of day-to-day life, resulting in a pervasive sense of fragmentation and inconsistency. People in the organization do not know what to expect from one day to the next. As a result, the organization develops an inward, short-term focus. Signs of such deterioration are observed in low morale, emotional outbursts, and subcultural clashes.
Table 2. Characteristics of Four Corporate Tribes

Tough Guy/Macho Culture
  Examples: Advertising, construction, entertainment, publishing, venture capital
  Risk: High
  Feedback from environment: Quick
  Rewards: Short-term focus; speed, not endurance
  People: Cowboys; individuals; rule-breakers
  Structure of organization: Flat, for fast decision making
  Behavior: Informal; temperamental behavior tolerated; stars

Work Hard/Play Hard Culture
  Examples: Consumer sales; retail stores
  Risk: Low
  Feedback from environment: Fast (you get the order or you don't)
  Rewards: Short-term focus; endurance, not speed
  People: Super salespeople are the heroes
  Structure of organization: Flat, for fast decisions; forgiving of poor decisions
  Behavior: Team players; informal atmosphere; friendly, optimistic, humor encouraged; no prima donnas

Bet-Your-Company Culture
  Examples: Oil, aerospace, capital goods, mining, investment banking, computer design, architectural firms; actuarial insurance
  Risk: High
  Feedback from environment: Slow (years, with constant pressure)
  Rewards: High stakes; constant pressure; long-term focus
  People: Company over individual; heroes land the big one; young managers seek a rabbi
  Structure of organization: Hierarchical; slow decision making
  Behavior: Formal, polite; team players; no prima donnas

The Process Culture
  Examples: Banks, insurance, financial services, government, utilities, heavily regulated industries (pharmaceuticals)
  Risk: Low risk, low stakes
  Feedback from environment: Very slow to none
  Rewards: Focus on how work is done; real world remote
  People: Achieving rank; V.P.'s are heroes (or survivors)
  Structure of organization: Hierarchical (many layers of management); slow decision making from the top down
  Behavior: Protect the system; cover-your-ass mentality; emphasis on procedures, predictability, punctuality, orderliness

Perpetuating Corporate Culture. If corporate culture is understood through analysis and observation and can be modified through change programs, then corporate training can be used to nurture and perpetuate a desirable culture. Several methods by which a culture perpetuates itself afford an opportunity for training:

• preselection and hiring of new employees
• socialization of members
• removal of members who do not fit in
• presentation of behavior appropriate to the culture
• justification of behavior beyond the norm
• communication of cultural values and beliefs

Many corporations have a clear idea of the kind of people they wish to hire, and that profile provides them a guide for recruiting. The analogy is a sports team that drafts players with certain talents and skills, but also with the ability to fit in with the other players. A corporation does the same thing. Once a person is recruited and hired, the corporation must socialize its new members through a formal orientation program, followed by less formal socialization in the first few weeks and months on the job. Some organizations go further by instituting a mentoring program to reinforce the corporate culture. Sometimes the match does not work out, so that the member who does not fit must be removed. This is usually done within an initial probationary period. The performance appraisal has become the instrument for perpetuating the corporate culture. Appropriate behavior is generally written in a formal employee handbook, a guide to ethical behavior, and a company code of conduct. These documents function as the formal presentation of the company culture. The informal code is in day-to-day activity, tradition, and company custom.
When a member of the company breaks the customs, the corporation must justify this apparent deviation from acceptable behavior. Perpetuating the culture is vital to survival. Of the hundreds of automobile makers in America just seventy years ago, only three major ones remain. Because chance and luck can happen to anyone, the survivors must have developed a culture that evolved with the changes in the market and technology.

Leadership Communication

Leadership implies elites and privileged classes, and may call up visions of military command and control. Contemporary collaborative organizations shun autocratic and hierarchical leadership styles. However, organizations need people who bind groups together and represent their groups elsewhere in the organization. The commonly held belief that leaders need to "walk the walk and talk the talk" underscores the importance of honesty and credibility in communication. Leaders use meetings, e-mail, face-to-face discussion, tape, visual models, and the media to share their vision with others. Some use words, visuals, or their own example to articulate their vision and capture the imagination of others. Successful leaders share certain traits: integrity, intelligence, ambition, will, and optimism. All are persuasive communicators. They take the time during the routine tasks of a job to consider the bigger picture or the organization's mission. Leaders are pragmatic, but most would not sacrifice people for power. They do not try to control everyone and spend a great deal of time developing consensus, rather than dealing with mistakes. In building consensus they seek win/win solutions as the most productive way to manage conflict.
Public Relations

Corporate identity is more than the sum of these parts: mission statement; logo, letterhead, and annual report; advertising; internal perception programs; external communication and public perception of company image. People learn to recognize a company by everything it does, from the products and services it sells to its buildings and employees. Mergers and acquisitions, downsizing, and restructuring have treated corporate reputation and image rather roughly. From GM to IBM, the face of business has changed dramatically. The need to build corporate identity through corporate culture has never been more important to a company's survival than it is now, leaving the twentieth century and entering the twenty-first. (For more information about public relations, see the section entitled "Communicating with the Media.") Corporate identity is demonstrated through a traditional relationship with various publics. With the change in the American economy and way of life from rural to urban during and after the industrial revolution, the role of the organization in a community changed. No longer was it sufficient for a business only to pay taxes and stay out of the affairs of the community. Its presence there had a strong impact on the lives of the people. Public relations has come a long, enlightened way since its beginnings, when wealthy company owners handed out nickels to the crowds. The role of public relations is now a strategic element in the business plans of most corporations. Public relations plans contain clearly articulated goals, methods, and measurements which coincide with larger corporate goals. Community relations, or outreach, programs are now more closely allied to the core business. For instance, a public utility may sponsor and run a series of seminars at retirement homes and villages on coping with power outages due to a thunderstorm or hurricane. The same utility may offer courses for home owners in how to handle and repair electrical appliances safely. Other companies may donate services like a telephone bank or computers to help with fund raising. Often a company will set aside a day to help the local community by building a community playground or renovating a park. And more and more organizations are sponsoring a section of a highway for litter control. Their participation is indicated by signs along the roadside. Outreach programs also include corporate education programs in communities, schools, and universities. Sometimes outreach programs include adult courses in first aid, water safety, crime prevention, and recycling. Company representatives often speak at high schools or colleges about a career in an industry and one at the company in particular. Blood drives for the local Red Cross depend on corporate participation, as does the United Way. Companies also offer in-kind gifts, such as used, but useful, office furniture and equipment, to local charities and schools. During natural disasters, corporations are a valuable source of volunteers, equipment, food, clothing, and medical supplies. Such activities are often done with little or no fanfare, depending on the corporate attitude toward volunteerism. Government relations involves meeting with local, state, federal, and in some cases international agencies to advocate for the corporation on matters in its interest. Some corporations provide legislators and agency professionals with position papers and information designed to inform and persuade
the agency. In the marketplace of ideas, such advocacy efforts often make the decision clear. In recent years individual corporations have avoided direct lobbying efforts in favor of joining an industry advocacy group that does that work for all companies in a given industry. A quick scan of the telephone directory in New York or Washington, D.C., under "Association of . . ." offers a snapshot of the thousands of groups formed to represent a particular viewpoint on an issue or industry. Because of abuses in the past in trying to influence government decision making, this area of corporate communications demands the highest ethical standards. Each company develops its own Code of Business Conduct, which often includes standards and procedures for ethical practices with fellow employees and subordinates, with customers, with vendors, with the community, and with the government. Professional organizations and societies, such as the Public Relations Society of America, also issue standards of ethical practice for their members and for the profession or industry as a whole. The American consumer has become highly skeptical of business practices and intolerant of companies which operate unethically. Maintaining the highest standards for propriety and ethical behavior is the best approach to developing a reputation for honesty and integrity. Customer relations is considered the "front porch" of the corporation. How a corporation routinely treats customers and vendors and how it handles an angry customer's complaint about a product or service form the foundation on which the corporation's image is built in the minds of individuals. It can be inviting and cooperative or cold and impersonal. Successful companies make every effort to meet customers' needs. The old cliché, "The customer is always right," is not a cliché for most companies. It is an informing philosophy. It is also a central principle in the quality movements that have infatuated American businesses through the 1980s and 1990s. Satisfied customers come back again. Disgruntled customers do not, and they also tell at least ten others about their bad experience. Good customer relations depends on positive word of mouth. The service industry has made customer relations not only central to the company business strategy, but an art form. In a market-driven economy, companies with close relationships with their customers have a better chance of surviving difficult periods than companies which do not listen to their customers. Solid, positive relations with customers are a fundamental part of the quality revolution in America.

Total Quality Management

Total Quality Management, reengineering, and other change programs have become a major preoccupation of the business community in the United States, particularly those involved with technical goods or services. In the United States and throughout the world, the quality process derives from W. Edwards Deming, whose theories of statistical quality control took root in post–World War II Japan, not in his native America. U.S. corporations embraced the quality process as the tool to use to combat the Japanese challenge for world industrial supremacy. In 1987 the Malcolm Baldrige National Quality Improvement Act made the trend official. The Act also established the Malcolm Baldrige Quality Award,
similar to the Japanese Deming Prize for quality, given since the early 1950s. The European Community is now at work on similar quality initiatives under ISO 9000. To underscore the power of such programs in the United States, such giants as Xerox, IBM, and Cadillac have pursued and won the Baldrige Award.
CORPORATE COMMUNICATION—MEETING THE CHALLENGE OF THE FUTURE

What is corporate communication, and who does it? Corporate communication is the total of a corporation's efforts to communicate effectively and profitably. It is a strategic action practiced by professionals within an organization or on behalf of a client. It is the creation and maintenance of strong internal and external relationships. The actions any particular corporation takes to achieve that goal depend in large part on the character of the organization and its relationships with its suppliers, its community, its employees, and its customers. (For more information see the section entitled "Communicating Change—Reengineering, Quality, Corporate Culture Programs.") Enormous changes in the workplace have had an impact on the communication practices of corporations and organizations. Avoiding print, broadcast, and electronic media no longer suffices as adequate communication policy or even effective corporate communication. A policy of developing strong channels of communication, internally and externally, has become a standard for most organizations. Not only has the nature of corporate communication changed over the last few decades, the type of people who create the company messages has also changed. The typical corporate communication professional is college educated, with a degree in the humanities. A major in journalism, English, marketing, public relations, communication, or psychology is common. Generally, practitioners are loyal company people with a long record in the organization. This reflects the importance of the strategic nature of the organization's communications. Often the professional has had a minor in economics or business or, depending on the company's core business, some related technical discipline, such as engineering or computer science. This is in stark contrast to a previous generation of business professionals, with backgrounds in law or accounting, who handled the company communications. Using communication professionals underscored another shift in corporate communications emphasis: from a total focus on the investment community or shareholders (any owner of the company's shares or stock) to a broader interpretation of community, which now includes all "stakeholders." A stakeholder is anyone who has a stake in the organization's success: vendors, customers, employees, executives, the local barber, and the kid on the paper route. The explosion in the number and type of media available for communications has also had an impact on the communication professional. In the past, mastery of the written word was more than enough. Writing is still the core skill on which all others are built. But a command of the essentials of broadcast media is now essential to the creation of corporate messages for TV, radio, e-mail, cable news programs devoted to
business topics, multimedia and digital communications on computer networks, and public speeches.

BIBLIOGRAPHY

1. M. B. Goodman, Corporate Communication: Theory and Practice, Albany: SUNY Press, 1994.
2. M. B. Goodman, Corporate Communication for Executives, Albany: SUNY Press, 1998.
3. T. Deal and A. Kennedy, Corporate Cultures, Reading, MA: Addison-Wesley, 1982; J. S. Ott, The Organizational Culture Perspective, Pacific Grove, CA: Brooks/Cole, 1989.
4. The Westinghouse Code of Ethics & Conduct, Westinghouse Corp., 1994.
5. G. Moore, Crossing the Chasm, New York: Harper, 1991.
6. M. B. Goodman, Working in a Global Environment, New York: IEEE Press, 1995.
7. S. Zuboff, The New York Times, D1, November 4, 1996.
8. V. Perugini, Anytime, anywhere, IEEE Trans. Prof. Commun., 39: 4–15, 1996.
9. B. Gates, The Road Ahead, New York: Viking, 1995 (rev. 1996).
10. N. Negroponte, Being Digital, New York: Knopf, 1995.
11. M. Zonis, Speech at Chicago Graduate School of Business's Business Forecast '97, New York, December 5, 1996.
12. G. Meyers, When It Hits the Fan: Managing the Nine Crises of Business, Boston: Houghton Mifflin, 1986.
13. J. Ottman, Green Marketing, Lincolnwood, IL: NTC Business Books, 1993.
MICHAEL B. GOODMAN Fairleigh Dickinson University
COPYRIGHTS. See INTELLECTUAL PROPERTY.
Wiley Encyclopedia of Electrical and Electronics Engineering
Data Presentation
Joan G. Nagle, Westinghouse Savannah River Company (Retired), Aiken, SC
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5603
Article Online Posting Date: December 27, 1999
DATA PRESENTATION

Presentation of data is an essential function of engineering documentation. Errors or carelessness in the reproduction of equations, construction of tables, inclusion of computer printouts, and—particularly—citing of data in text can make documentation work ineffective or even meaningless. Guidelines for the effective presentation of data are provided in the following paragraphs.

EQUATIONS

An equation is a shorthand way of expressing a relationship or a process. It is like jargon; when we use this language with those who understand it, we have simplified communication enormously. How many words would it take to express the following:
\sigma_x^2 = \frac{1}{n}\left(\sum_{i=1}^{n} X_i^2 - n\bar{X}^2\right)    (1)
On the other hand, what possible meaning does it have for the uninitiated? Equations are significant in topical reports, articles, and books dealing with the derivation of relationships and generalizations from empirical data. We must, of course, explain the relationship and its derivation in the text of the report: this is what we did to get the data, this is how we manipulated and analyzed it, this is what it looked like (graph), and—then—this is the resulting function. Long derivations, or series of equations, may be put in an appendix with only the most important ones included in the body of the report. An equation may be used in a manual or other instructional material to help the reader understand the principle behind a device or a process. However, we must consider carefully just how much principle this specific audience needs, and, on the other hand, where we approach overkill (and resulting confusion). Also, a manual user may need an equation to derive a number that is required for operation of the equipment or process. For example, the user must solve for x in the equation x = 1.25 √A, and then set an instrument using the value obtained. Equations employed for purposes like this should be simple and clearly explained in text, keeping in mind the mathematical and technical ability of the potential user. If the function is not a simple one, it is better to substitute a
This article is based on the Handbook for Preparing Engineering Documents, IEEE Press, 1996, pp. 107–118, 205–206, 296–301, © 1996 IEEE.
Figure 1. Example of reduction of text to table.

Text Explanation

Testing was performed off site on 856 samples in 199X. Only 16 percent (133 samples) of the 856 samples passed the first test. 84 percent (723 samples) of the original 856 failed the evaluation at a cost of $107,727. The failed samples were submitted for a second test with an 80-percent acceptance rate. The remaining 20 percent of the samples that failed the second test cost $38,304, for a total first- and second-test failure cost of $146,031. In contrast, the on-site testing facility has tested 677 samples during the current year. Projections of the 199Y testing activities have been performed so the cost can be more accurately compared. The 856 projected samples subjected to the first test rendered a 91-percent (776 samples) acceptance. The remaining 9 percent (80 samples) failed, costing $5,280. The first-test failures were submitted to the second test, in which 52 percent (41 samples) were accepted. The balance (48 percent, 39 samples) of the samples also failed the second test at a cost of $4,563, or a total cost for first- and second-test failure of $9,843.

Table Explanation

                                 First-Test Failures         Second-Test Failures
Testing    Number of Samples     Number  Percent  Cost ($)   Number  Percent  Cost ($)   Total Failure Cost ($)
Agency     Tested
Off site   856                   723     84       107,727    144     20       38,304     146,031
On site    856 (a)                80      9         5,280     39     48        4,563       9,843

(a) Projected on the basis of current year-to-date testing.
graph of the function (with a full grid) and instructions for finding the needed value from the curve. There are two ways to set equations in a document: inline and displayed. (Inline equations are also called shilling equations.) An inline equation is shown in the preceding paragraph. Inline equations are usually short, uncomplicated expressions, with few or no outsized symbols (like summation signs and braces) which would introduce a nonstandard line spacing. Fractional and quotient expressions in these equations are written in shilling form, that is, with the slash symbol: x/y = 1/2 (a + b). They should never be broken at the end of a line; if a break is unavoidable, the equation should be displayed. An equation that is too long to fit on one line, even if displayed, should break at an operation sign (for example, plus or minus). The operation sign should be repeated on the next line for readability. All lines of the same equation should be aligned flush left under the operation sign. The first equation shown in this article [Eq. (1)] is a displayed equation; that is, it is set on a line by itself. If there are several equations in a work (say, more than three) and a need to refer to them in text, they should be numbered (like tables and figures). The equation number is often set in parentheses at the right-hand margin. When a displayed equation is introduced in text, it is good practice to use an introductory sentence, followed by a colon, and then insert the displayed item: The relationship between x and y is shown by the following equation:

x = A + 2y
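The article frames equation entry in terms of whatever equation software the word processor provides; purely as a supplementary sketch (not part of the original text), the same distinction between inline (shilling) and displayed, numbered equations can be illustrated in LaTeX, with amsmath supplying the number:

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}

    % Inline (shilling) form: a short expression kept within the text line.
    The test condition requires $x/y = (1/2)(a + b)$ throughout the run.

    % Displayed, numbered form: set on its own line and cited by number.
    The sample variance is
    \begin{equation}
      \sigma_x^{2} = \frac{1}{n}\left(\sum_{i=1}^{n} X_i^{2} - n\bar{X}^{2}\right)
      \label{eq:variance}
    \end{equation}
    as given in Eq.~\eqref{eq:variance}.

    \end{document}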
It is sometimes preferable to use, instead, whatever punctuation would be logical if the action of the equation were not displayed—that is, described in words, rather than mathematically. For instance,

Solving for y gives
    y = (A − x)/2
and this is the definition of the horizontal scale shown in Fig. 4-1.

Chemical equations are treated in much the same way as mathematical equations. Most of today's word processing programs give all of us the capability to set complex mathematical material with incredible ease and with precision. It is well worth the engineer's while to learn to use whatever equation software is available. The average typist/word processing operator is likely to make errors of content in this kind of work. But the engineer understands the function of summation signs and limits; integrals; over- and underbars; delimiters like parentheses, brackets, and braces; as well as Greek letters and other symbols. With today's word processing programs, it is much easier to enter this information ourselves than to explain calculus, often handwritten, to a word processing operator.

TABLES

Like an equation, a table can save a lot of words, as shown by Fig. 1. In this case, an engineer wrote out the text explanation
in longhand and gave it to the group secretary to type. Many hours and much discussion/argument later, his colleagues finally understood what the engineer was saying, and one of them constructed the table explanation. The table is easier to comprehend. It shows the large difference in the costs of off-site and on-site testing more emphatically. And it takes up less space/less of the reader's time. Tables, like graphs and equations, are ways we show relationships in engineering documentation. Tables, in fact, show multiple relationships. The table in Fig. 1 may be used to discern the relationship between first-test failure and second-test failure at the two testing agencies, to compare rates of failure between the agencies, and of course to compare costs. A table communicates information better than text when the following conditions are present:

• There is more than one set of related data.
• The data sets reference the same parameters—for instance, meter readings (on the same unit-of-measurement scale) at various locations, or time intervals.
• There is reason to be interested in comparison of the data—for instance, which location had the highest reading?
On the other hand, if the information in the table can be expressed more simply by words, a graph, or a chart, a table should not be used. A graph, for example, may not give so much data, or such exact data, but it gives the reader an instant picture of the relationships under discussion.
Table Components and Assembly

To discuss what makes a table work well, we need to name its parts (Fig. 2).
Table 1. Example of Unboxed Table

         Meter Reading at Indicated Time (V)
Test     1 hr     2 hr     3 hr
1        51       60       61
2        78       83       85
3        26       32       33
The following guidelines govern data entry in tables:

• Information always reads down from the boxhead, all the way to the end of the table.
• Information reads down from the stub head. (That is, the stub head describes information in the stub entry column, not the information in the column heads.)
• Information controlled by the stub head reads across.

Note that a table may be boxed, as shown in Fig. 2 (that is, have a border around it), or unboxed (Table 1). The use of borders and lines is discussed below, under "Table Format." Like graphs, many tables show a relationship between independent and dependent variables. Usually, the independent variable (time, for example) reads horizontally, across the table; the dependent variable (test number, for example) reads vertically. Care must be taken to ensure that the column and spanner heads describe the column entries exactly. For instance, Table 1 might have this spanner head: "Meter reading at indicated time (hr)." But it would not be accurate because the entries are in volts, not hours. Units of measurement are just as important to tables as they are to graphs and charts. Every column or spanner head needs a unit of measurement (or some explanation if the values are arbitrary numbers rather than measurements). It is an aid to both clarity and economy to put the unit in the head rather than in the entries:

This is better . . .          than this . . .
Boiling point (°C)            Boiling point
100                           100°C
197                           197°C
290                           290°C

Standard practice for use of a dual system of measurement [for example, both SI (metric units) and engineering (English) units] is covered below, under "Metric Conversion." Spanner heads help to combine data and avoid repetition. Instead of repeating the unit of measurement after the column head Boiling point and after Freezing point, we can introduce a spanner:

        Temperature constants (°C)
Boiling point          Freezing point
Figure 2. Parts of a table: the boxhead (containing the stub head, spanner heads, and column heads), body spanners, and the stub and column entries of the table body.
A good question to ask is "How much data does this description apply to, and what is the clearest way to show that?" A lengthy description that applies to a small amount of data could be placed in a footnote. Sometimes the stub column is hard to define with a stub head because the stub column entries do not seem to be classifiable.
Often there is an "umbrella" word that will cover the list, like parameter or function or instrument. If we cannot find any common denominator, we may have to rethink the whole table. Do the data sets truly reference the same parameters? If not, we need to present the data in some other way. We often see a column of sequential numbers before the stub; that is, the entry rows are numbered from 1 through n. Unless these numbers truly describe some feature of the data (such as item number or test number), or unless there is a real need to refer to certain rows of the table by number designator, this column should be eliminated. In the former case, where the numbers are part of the descriptive data, the column should probably follow the stub rather than precede it. Using singular nouns to describe column entries is a commonly accepted convention in table construction. Examples are meter reading, tube diameter, and voltage. Body spanners are useful ways to divide data sets—for instance, data from the United States, data from Europe, data from Asia. Formatting can also be used to divide data for clarity and ease of comprehension (see "Table Format," below). The table title should be both descriptive and concise. For instance, instead of titling a table merely "Cumulative use factors," we might call it "Cumulative use factors for various stress ranges." Tables are also used to summarize sets of nonnumerical data, as shown below:

Component        Location
Heat pump        Area A
Pressurizer      Area B
Cooling tank     Area C
Informal tables are essentially lists. They have no number or title; they may have no stub heads, as shown below:

Advantages              Disadvantages
Low cost                Long lead time
Design simplicity       High consumables requirement
A matrix is a special kind of table in which each item in the stub column interacts with each item in the column heads. The matrix in Fig. 3 shows distances between plants. Because it is symmetrical (the same parameters are shown in stub column and column heads), certain spaces are blank. That is, there is zero distance between pairs like Bates and Bates. Also, each entry (value) appears twice, since the distance between Bates and Carlin is the same as the distance between Carlin and Bates. The matrix can be read either across and down or down and across. In a nonsymmetrical matrix, there is no repetition of data. Such a matrix might show currency conversion factors, in
Figure 3. Example of a matrix.

           Bates    Carlin    Engle
Bates                12        28
Carlin      12                 23
Engle       28       23
which the value for dollars expressed in yen is not the same as the value for yen expressed in dollars. It is fairly common to see blank cells in symmetrical matrixes, since most readers understand the principle behind this kind of data presentation. However, some explanation is recommended for missing, inadequate, or negligible data in other tables. We can use the word none or N/D (no data), or insert an em dash (—), to show that we have not simply forgotten to fill the cell. N/A is commonly used for not applicable. It is good practice to footnote N/A or N/D the first time it is used. Footnotes are often used to explain a missing or a seemingly anomalous table entry, to qualify an entry for which the conditions varied from those in the remainder of the table, or to expand on an abridged entry. However, if a table accumulates too many footnotes, we again need to rethink the appropriateness of a table in this application. The data may simply not match up well enough for tabular presentation.

Table Placement

If the table is to be used as the document is read, it is best to include it with the discussion. Many tables, however, are supplemental information; that is, the reader can understand the text without referring to them but may want to consult them later. In these cases, tables can be grouped in an appendix (and referred to in the discussion). A third (and recommended) alternative is to summarize the data in a short table in the body of the document and provide the full body of information in the appendix. When a table is used, it is referred to in the text by number (except for informal two- or three-line tables inserted in text like spot drawings). The necessary amount of explanation or comment is provided. For instance: The voltage fluctuations were recorded at 10-minute intervals and entered in column 3 of table 7, which shows that the fluctuations were most marked between 8:15 and 11:20 AM.

Table Entry

Electronic production of tabular material is a task most engineers try to avoid. It is time-consuming and tedious. However, even if data entry is done by someone other than the document creator, we need to understand how tables are created and formatted. Then, at least we know what to ask for. It is even better to study the word processing manual to the extent that we can insert a program-generated table into the document. When we "insert table," we are asked how many columns and how many rows the table should contain. Most often, we do not know this exactly at the outset. We can enter an estimate; columns and rows can be added or deleted later. The table will usually appear in our document extending across the entire page width, divided equally into the number of columns we have specified. If we do not want a full-page table, we can indent it right and/or left. We usually do not want equal-width columns; we typically want a wider column for the stub and narrower columns for the data. Column widths are easily adjusted by highlighting the entire column and entering the desired width when prompted. Then, we can
usually tell the computer to take us to the next (or the preceding) column, resize that one, and so on. It may take some arithmetical work and fine adjustment to get the right proportions. When the table has been set up the way we want it, including formatting (below), we can proceed to enter data. Alternatively, we can give this template (on disk, perhaps) to a typist/word processing operator for the bulk of the data entry task. (Of course, it will be necessary to proofread the result very carefully.) Tables generated by a spreadsheet or other data analysis program and satisfactory for use as is may be entered electronically. That is, the table can be converted to the word processing program, imported into the word processing program, electronically cut and pasted into the word processing program, or scanned onto frame pages. The caption (table number and title) should be entered on the frame page during document creation; the page number will be entered automatically.

Table Format

Once the content of the table has been sorted out into an accurate representation of the facts we want to present, we can format the table for optimum readability. We have all seen tables that featured less-than-optimum readability. The type was too small and the lines too crowded; we could not find our way from one side of a line to the other; we forgot what the column heads and footnotes were when we got to the continuation on the next page. Given that the tabular data we present is very often the heart of the report, it is worth a little time and effort to package tables for maximum effectiveness. The investment pays off especially well when we have a series of similar tables, in which case we can simply copy the last one and replace the entries with the current material. Before we start entering heads and data, we want to give the computer some formatting information. Many word processing programs include a variety of automatic table formats from which to select when entering the table in a document. Otherwise, we must first specify type style and size. The stub head and column heads need to stand out, so we can make them bold, italic, or bold italic. Stub and data entries are usually in regular (that is, not bold, not italic) type style. All material in a table (heads and body) should be entered in the same size type. The size of the type depends to a large degree on the size of the table. If we must fit a great deal of data into the table, especially if we need many columns, we will have to use a smaller size than the body text. In the body of a document, table type should be no more than 2 points smaller than the body text. Tables that require very small type are probably appendix material. (Note that although 8-point type may be readable, it does not fulfill the legibility requirements for microfilm work.) Text entered in spanners may be set in bold, italic, or bold italic; however, the style chosen should not be the same as that chosen for the stub and column heads. (It should be a degree less emphatic; for instance, if the heads are bold italic, the spanner can be plain italic.) All-capital letters may be used in spanners if the line of type is fairly short (less than the full width of the table).
Figure 4. Two examples of table entry alignment: (a) all heads and entries flush left; (b) all heads and entries centered, with the stub column flush left.
Entries in the columns may be aligned flush left or flush right, centered, or (for numbers) aligned on a decimal point. Heads are often centered in their columns. Stub entries are almost always aligned left. Subcategories of stub entries may be bulleted and/or indented, like regular text. Two examples are shown in Fig. 4. When column entries consist of one or two words, their alignment matches that of the heads above them. More lengthy text entries are usually entered flush left. Number entries are usually centered on the decimal point, literally or figuratively. In the latter case, when there is no decimal point in the number, we can simply tell the computer to use right alignment. If there is a decimal point (in any or all of the entries), we must set a decimal tab at the desired point in the column, and then press the Tab key before each entry. Text in spanners is aligned left if they span the entire table (including stub column); it may be centered if the spanner covers only the data columns. Column heads should be ‘flush bottom’—that is, the bottom lines of all head entries should be aligned. This may necessitate adding carriage returns (extra lines of space) before some lines. It is not a good idea to set column heads vertically or at an angle, since type turned on its side is hard to read. A table is a box of data, whether it is enclosed in a visible box (borders) or not. The choice between boxed and unboxed tables is most often a matter of personal preference. Even in an unboxed table, however, the boxes are there. If the table has been created with the table function of the word processor, the box borders can be seen, faintly, on the computer monitor. With the word processor, a visible box is created by turning on the borders of the table and the columns and/or rows. An unboxed table, if it is anything more than the simplest two-by-two or three-by-three arrangement, needs some kind of treatment to guide the readers’ eyes across and down. At a minimum, there should be a divider between the heads and the body of the table (perhaps a double line). Additional horizontal lines (rules) are needed after (and, preferably, before) spanners. A very long table should be divided by horizontal rules, perhaps every five or ten rows; an additional line of space may be used instead. A rule at the bottom of the table helps to separate it from text. Vertical rules are optional; they help to divide columns of data, especially very narrow columns. If desired, tint blocks may be used in place of rules (Fig. 5). A tint block is an area of color or a shade of gray under a segment of text, illustration, or table to highlight it or set it apart. To add a tint block to a row, column, or cell, we turn on the shading feature of the word processor.
Shades for tint blocks should be strong enough to copy easily and light enough so that the text can be read clearly. Finding the best shade (percentage) is often a matter of trial and error; try a low percentage (5% or 10%) first and check it out by printing a sample page. All type set inside a tint block should be boldface. In a simple two-column table (like a table of contents or index), dotted, dashed, or solid line leaders are sometimes used to guide the readers' eyes from one column to the other. These tables are usually set up with tabs rather than with the table function, and the desired leader can be chosen when the tab is set. Leaders should only be used when they serve an important function; otherwise, they clutter up the page unnecessarily.

Figure 5. Example of use of tint blocks in a table.

The caption (that is, the title plus the designator, such as Table 4-1) may be centered over the table or set flush left. It should be in the same size type as body text but may be bold and/or display type. The table number (usually including a chapter or section designator) should be on the first line, with the table title on the second, for easy location and scanning. For footnote designators, we can use letters, since table entries are usually numerical. They should, of course, be set as superscripts. It is best to keep the number of footnotes to a minimum. Tables frequently spill over onto a second page, or further. The reader needs to see the full caption (number and title) on each continuation page. It is common to put the word continued or the abbreviation cont in parentheses after the table number. All the column heads of the table should also appear on continuation page(s). If spanners are used, the most recent spanner should be printed between column heads and data, again followed by continued or cont in parentheses. Table footnotes should be repeated on each page of the table to which they apply. Thus, the continuation page of a table might include only notes b, d, and f, which is all right. More information on table formatting may be found in the article entitled DOCUMENT/INFORMATION DESIGN in this encyclopedia.

COMPUTER PRINTOUTS

Computers are very good at constructing tables, and in fact, computer printouts are often the source of the data we want to present. This facility tempts us to stuff engineering documents with huge batches of computer printout. Sometimes this is helpful or even required; more often, it is not. If we must include raw computer data, perhaps we can do it more effectively and/or economically by means of diskette, CD-ROM, or microfiche. At the least, the printouts should be culled from the body of the document, where they are likely to interrupt the reader's train of thought, and put in an appendix for reference as needed. However used, these compilations of data need identification, with a caption (table number and title). They usually need to be reduced in size and pasted down on (or scanned onto) the page bearing the caption.

EXPRESSION OF NUMERICAL INFORMATION
Numbers can be expressed in different ways—Roman and Arabic, superscript and subscript, rational and irrational, whole integers or derived to the 12th decimal place. We need to be consistent in expressing numerical information, just as we are with textual matter. Superscripts and subscripts are common in engineering documentation, but they wreak havoc with line spacing. When used, they should be set in a smaller type size (at least two points smaller), or the numbers thus modified should be displayed like a displayed equation. Super- and subscripts are set with the character format function of the word processor. The facility of the computer in generating numbers has led to a decline in the consideration of significant figures. The computer will keep dividing 11,624,581 by 346, at least until its display capacity has been reached. But do we really want 12 figures after the decimal point? Not just because we have them; these figures may not be significant. When we include figures with no significance, we lead the audience to assume more accuracy than the work provides. The following guidelines cover most situations:

• The number of significant figures used is a function of the accuracy of the measuring device. For instance, when we measure a room with a yardstick, the result is not any more accurate than the eighth-inch marks on the stick. Thus 10 feet, 6 inches, for instance, would be the accuracy limit of such a measurement. However, in machining, we can use micrometer calipers to measure fractions of a millimeter.

• The number of significant figures used is a function of the counting method. If we count every person who enters an arena by means of a turnstile, we are justified in presenting the total attendance as 47,631. However, if we count the number of people standing on a 10-foot-square plot and multiply by the number of plots, the result is a lot fuzzier. We have factors of error in both the randomness of the sample and the proportion of the sample to the whole; the result would probably only be significant to one or two figures—50,000 or 47,000.

• The number of significant figures used is a function of the data used to create the number in question. When we divide 20 by 3, our answer is no more accurate than 6.7 (one digit more than the smaller number in the original data). But when we truly have original data to the third decimal place, we can divide 20.000 by 3.000 and present the answer as 6.6667. (Note that a zero to the right of the decimal point counts as a significant figure; those to the left generally do not.) If we divide 20.000 by 3, however, our answer is no more accurate than the smaller number in the original data; again the answer is 6.7.
On the other hand, in addition and subtraction, we can use an answer that corresponds to the higher or highest number of significant figures in the calculation. For instance, 4.327 plus 8 equals 12.327.

• Tolerances, or margins of error, should be expressed to the same level of accuracy as the measurement they qualify. For instance, a machining tolerance may be given as 31.000 ± 0.005, but not 31 ± 0.005.

• We can round off—that is, arbitrarily reduce the number of significant figures presented—if we do it consistently throughout a calculation or series of calculations. Even the Internal Revenue Service permits us to drop cents from our computation as long as we are consistent.

A word of caution about expressions of tolerance, noted above. When we are very sure that our audience is made up entirely of other engineers, the plus-or-minus symbol (±) is acceptable. However, when we require the reader/user to calculate the limits of a tolerance or range, we are creating a potential for error. In a manual or procedure intended for use in the field, this is not acceptable. Thus, we need to give tolerances and limits in immediately recognizable terms, using the format <nominal value> <units> (<lower limit> to <upper limit>). For example:

125 vdc (115 to 135)

Manual/procedures users should never be required to calculate percentages. For calculations like this, we can provide a graph or table. Similar considerations for in-the-field users include using the same measurement units that appear on the user's instruments. (That is, we do not write about foot-pounds if the equipment shows inch-pounds.) Also, we use values that are readable on the user's instrument. One-half the distance between markings is about the limit of instrument-reading accuracy. Scientific notation for the expression of very small or very large numbers involves the use of powers of ten, for instance, 5.7 × 10⁻⁸. These numbers are usually written with no more than one figure to the right of the decimal point unless there is justification for doing otherwise. It is preferable, but not essential, to use a symbol font for the "times" mark, rather than the letter x in a normal text font. It is not good to use an asterisk (*).

METRIC CONVERSION

More and more often, in today's global community of science, technology, and business, we are asked to furnish measurements in SI (Le Système International d'Unités). SI closely resembles the centimeter-gram-second and meter-kilogram-second metric systems we may have learned many years ago in basic science courses—but tidied up, standardized, and unified. Usually, we must also present the data in standard/British/engineering units. This requirement can present special problems. The primary principle here is consistency; whatever usage we adopt should remain the same throughout the document (or series of documents). The following usage is recommended:

• Choose one unit system to be primary and one to be secondary. In engineering documents prepared for U.S.
readers, the engineering units are usually primary and the metric secondary. (In this encyclopedia, however, the reverse is true.)

• Express all numbers in the primary system followed by the corresponding value in the secondary system, in parentheses: 6 in. (152.4 mm)

• If the value is already contained within parentheses, change the outer marks to brackets: [The original length was 6 in. (150 mm).] Or change the parentheses to brackets and the outer marks to parentheses.

• Observe the principles given above for matching the number of significant figures between systems. However, it must be noted that the base unit in one system may be a great deal larger or smaller than the base unit in the other system. For instance, in the above example, millimeters are much smaller than inches; for that reason, the conversion (1 inch = 25.4 millimeters) produced a result (152.4 mm) that implied more accuracy than is warranted by the original measurement. Engineering judgment is required here. We might use meters instead of millimeters, with the following (more reasonable) result: 6 in. (0.15 m) Here, since a meter is so much longer than an inch, we have added one significant figure to the metric equivalent.

• In table headings, where the unit of measurement is usually given in parentheses following the column head, we again change to brackets and give the secondary unit in parentheses: Temperature [°F (°C)]

• Similar practices are followed in illustrations. In callouts (labels within the illustration) and dimensions, values are expressed in units of the primary system followed by expression in the secondary system, in parentheses. In graphs, units in the primary system are shown in labels at right and at the bottom; units in the secondary system are shown in labels at left and at the top. Alternatively, both values may be shown on the same scales, but this can be confusing. Both these practices are shown in Fig. 6.

The following guidelines apply specifically to the use of SI measurement units:

• Unit names are not capitalized even when they derive from the name of a person. Unit symbols are not capitalized unless they derive from a proper name. For example,

Unit        Symbol
meter       m
newton      N
pascal      Pa
Note that abbreviations for SI units are commonly referred to as symbols.

• The plural of an SI unit is formed by adding s, except for hertz, siemens, and lux, which are the same in both singular and plural, and henry, for which the plural is henries.

• SI prefixes and symbols at various multiplication factors are shown in Table 2.
Figure 6. Use of dual measurements in graphs. (a) Preferred method: the primary units, Weight (lb) and Length (ft), label one pair of axes, and the secondary units, Weight (kg) and Length (m), label the opposite pair. (b) Alternative method: both values appear on the same scales, as Weight [lb (kg)] versus Length [ft (m)].
• The final vowel in the prefix is retained in all but two cases: megohm and kilohm. No space or hyphen is used between prefix and root.

• Certain conventions govern the use of terms: The prefixes hecto, deka, deci, and centi are accepted SI forms, but they are rarely used. (Centi does, however, appear frequently in non-SI work.) The same unit should be used for all values of the same quantity in tables and discussions of these quantities. Specific units are common to different fields and should be used even when the order of magnitude is very large or very small. For example,

kPa     for fluid pressure
MPa     for stress (except in very weak materials, for which kPa may be more appropriate)
GPa     for modulus of elasticity
kg/m³   for mass density (except for fluids generally measured in liters, for which g/l may be used; the numerical value is the same)
When it is possible to do so within these conventions, units chosen should result in numerical values between 0.1 and 1000. The recommended unit for fluid pressure (barometric pressure, gas pressure, water pressure, and hydraulic pressure) is the kilopascal (kPa). There are two exceptions: in the field of air-conditioning, where pressure differentials in air ducts are measured in pascals (Pa), and in measurement of high vacuum in terms of absolute pressure, where Pa and mPa are more convenient. The term bar is a metric unit but not an SI unit (although accepted internationally for a limited time in special fields because of existing usage). Absolute and gauge pressures are specified by using the words, not the abbreviations a and g; for example,

at a gauge pressure of 7 kPa
at an absolute pressure of 25 kPa

or

7 kPa (gauge)
25 kPa (absolute)

• The word per is used to form the name of a compound unit that is a quotient, and a slash (/) is used to form the symbol; for example, kilometer per hour (km/h). To avoid ambiguity, use one of the following forms for combinations of symbols: meters per second per second, m/s² or (m/s)/s.

• A compound unit that is a product is written with a space or hyphen between the words, and a multiplication or product dot (a dot at the center of the line space) between the symbols; for example, newton meter or newton-meter (N·m). The product dot can be found in the symbol font of most word processing programs. If it is not available, use a period set as a superscript. In either case, the dot should be set in boldface.

Table 2. SI Prefixes and Symbols for Unit Multiplication Factors

Multiplication Factor          Prefix   Symbol   Term
1,000,000,000,000,000,000      exa      E        one quintillion
1,000,000,000,000,000          peta     P        one quadrillion
1,000,000,000,000              tera     T        one trillion
1,000,000,000                  giga     G        one billion
1,000,000                      mega     M        one million
1,000                          kilo     k        one thousand
100                            hecto    h        one hundred
10                             deka     da       ten
0.1                            deci     d        one-tenth
0.01                           centi    c        one-hundredth
0.001                          milli    m        one-thousandth
0.000001                       micro    µ        one-millionth
0.000000001                    nano     n        one-billionth
0.000000000001                 pico     p        one-trillionth
0.000000000000001              femto    f        one-quadrillionth
0.000000000000000001           atto     a        one-quintillionth
Table 3. Metric (SI) Units and Conversion Factors (quantity, SI unit, and symbol shown; the original table also gives, for each row, the common engineering unit and the factor by which to multiply it to convert to SI)

Quantity                               SI Unit                             Symbol
acceleration                           meter per second squared            m/s²
area                                   square kilometer                    km²
                                       square hectometer (hectare)         hm²
                                       square meter                        m²
                                       square centimeter                   cm²
                                       square millimeter                   mm²
capacitance                            farad                               F
density                                kilogram per cubic meter            kg/m³
electric current                       ampere                              A
electric resistance                    ohm                                 Ω
electromotive force                    volt                                V
energy, work, or quantity of heat      joule                               J
                                       kilojoule                           kJ
                                       megajoule                           MJ
                                       kilowatt hour                       kWh
force                                  kilonewton                          kN
                                       newton                              N
luminous flux                          lumen                               lm
mass (commonly, weight)                kilogram                            kg
                                       gram                                g
                                       milligram                           mg
moment of force (torque)               newton meter                        N·m
plane angle                            radian                              rad
                                       degree (a)                          °
power or heat flow rate                kilowatt                            kW
                                       watt                                W
pressure (or vacuum)                   kilopascal                          kPa
quantity of electricity                coulomb                             C
rotational frequency                   revolution per second               r/s
                                       revolution per minute               r/min
                                       revolution per hour                 r/h
sound pressure level                   decibel                             dB
speed (velocity)                       meter per second                    m/s
                                       kilometer per hour                  km/h
stress                                 megapascal                          MPa
temperature or temperature interval    kelvin (b)                          K
                                       degree Celsius                      °C
viscosity (dynamic)                    millipascal second                  mPa·s
viscosity (kinematic)                  square millimeter per second        mm²/s
volume                                 cubic meter                         m³
                                       cubic centimeter                    cm³
                                       cubic millimeter                    mm³

(a) In SI, fractions of a degree are expressed as decimals, not in minutes and seconds.
(b) To convert to degrees kelvin, calculate the Celsius value and then add 273. No degree symbol is used with kelvin values.
• The choice of SI unit for weight depends on whether we mean mass or the force of gravity. The unit of mass is the kilogram, and the unit of force is the newton. Whenever the word weight or weigh is used, the common meaning is mass (especially in expressing capacity ratings, as of a bridge). However, in some fields weight is defined as the force of gravity acting on an object; in this case, the SI term newton is used. For example, on Earth, the weight of a 10-kilogram mass is about 98 newtons.
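A one-line calculation makes the distinction concrete. The Python fragment below is only an illustration; the function name and the rounded value of g are our assumptions.

# Mass is expressed in kilograms; the force of gravity on that mass is
# expressed in newtons (F = m * g). Standard gravity rounded to 9.8 m/s².
def gravitational_force_newtons(mass_kg, g=9.8):
    return mass_kg * g

print(gravitational_force_newtons(10))  # -> 98.0 (a 10 kg mass weighs about 98 N)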
A full treatment of SI may be found in ANSI/IEEE Standard 268-1982 and ISO 1000-1981. Table 3 lists SI units for various quantities, their symbols, and factors for converting some common engineering units to SI.
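Conversion factors like those in Table 3 are simple multipliers, so they are easy to apply in code. The following Python sketch is illustrative only; the small factor dictionary copies a few entries from Table 3, and the function name is ours.

# A few (from_unit, to_unit) pairs and their Table 3 factors.
CONVERSIONS = {
    ("ft/s^2", "m/s^2"): 3.048e-1,
    ("psi", "kPa"): 6.895,
    ("Btu", "J"): 1.054e3,
    ("lbf", "N"): 4.448,
    ("mph", "km/h"): 1.609,
}

def to_si(value, from_unit, to_unit):
    """Multiply a value in a customary unit by its Table 3 factor."""
    return value * CONVERSIONS[(from_unit, to_unit)]

print(to_si(60, "mph", "km/h"))   # -> 96.54
print(to_si(14.7, "psi", "kPa"))  # -> about 101.4 kPa (one standard atmosphere)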
DATA EDITING

There is a high potential for embarrassment in the presentation of data. It is so easy to enter a wrong digit, and
spell-checking cannot catch it. We must check all statistics, dates, phone numbers, and the like. Words like maximum and minimum are red flags: is the value referred to truly the maximum or minimum? Whenever a column of data has been added up and the total presented, we want to check the addition. In particular, when a column shows a series of percentages, we want to ensure that they add up to 100 (or else explain why not). (By custom, however, the number 100 is not entered in the total line, if any.) In fact, we want to check any mathematical operation for correctness; not only may an arithmetic mistake have been made, but a component may have been changed somewhere along the line, making the original result of the operation incorrect. Orders of magnitude and other exponents are often entered incorrectly, with the wrong number or a missing minus sign. Superscripts and subscripts are also sources of error because of their lack of inherent meaning and the fact that they are set in smaller type than text. In tables, are the stubs and headers complete, correct, succinct, and accurate? Are the entries correct and legible? Equations need to be checked, rechecked, and then rechecked again, especially if they have been entered by clerical personnel. This check is best done by the professional who was originally responsible for inserting the equation.
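Many of these checks can be automated before a document goes out. The short Python sketch below is illustrative only (the function names, tolerances, and sample figures are ours, not the handbook's): it confirms that a printed total matches the sum of its column and that a percentage column adds up to 100.

def check_total(values, reported_total, tol=1e-9):
    """Flag a table whose printed total does not match the sum of its entries."""
    return abs(sum(values) - reported_total) <= tol

def check_percentages(percentages, tol=0.05):
    """Flag a percentage column that does not add up to 100 (within rounding)."""
    return abs(sum(percentages) - 100.0) <= tol

print(check_total([12.5, 30.0, 7.5], 50.0))    # True: the addition checks out
print(check_percentages([24.9, 50.1, 25.1]))   # False: explain or correct the column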
NONPRINT DOCUMENTS

These editing precautions are especially important in the case of audiovisual aids (overhead projections, slides) shown with oral presentations. Great care should be taken in projecting tables before an audience; only very brief and very clearly formatted tables should be used. (More complex data may be given to the audience in the form of a printed handout.) Additional information may be found in the article entitled ORAL PRESENTATIONS in this encyclopedia.

The creation of electronic documents, such as software help files and manuals on the World Wide Web, is a complex process. The computer display is clearly different from a paper book. The ways in which users access, assimilate, and process electronically presented information involve an entirely new set of human factors engineering considerations. Those who are involved in the production of electronic documents should refer to the article ELECTRONIC DOCUMENT PRODUCTION/REVISION in this encyclopedia, or to the evolving literature on this subject, including the first two references below.
BIBLIOGRAPHY

The material in this article has been compounded from various sections of Handbook for Preparing Engineering Documents, written by Joan G. Nagle and published by IEEE Press (Piscataway, NJ, 1996). The book also covers overall design of documentation, creation of text and illustrations, document testing, and document production. It includes tables of recommended usage in such areas as abbreviations, capitalization, compounding, numbers/numerals, punctuation, references, spelling, and symbols. Other references include the following:

S. Carliner, Elements of editorial style for computer-delivered information, IEEE Trans. Prof. Commun., 33 (1), March 1990.
W. Horton, Designing and Writing Online Documentation: Help Files to Hypertext, New York: Wiley, 1990.
R. Krull (ed.), Word Processing for Technical Writers, Amityville, NY: Baywood, 1988.
The Chicago Manual of Style, 14th ed., Chicago: University of Chicago Press, 1993.
P. Rubens (ed.), Science and Technical Writing: A Manual of Style, New York: Henry Holt, 1992.
JOAN G. NAGLE
DATA PROCESSING FOR MANUFACTURING. See MANUFACTURING PROCESSES.
DATA PROCESSING FOR REMOTE SENSING. See REMOTE SENSING GEOMETRIC CORRECTIONS.
DATA PROCESSING INDUSTRY. See INFORMATION TECHNOLOGY INDUSTRY; SOFTWARE HOUSES.
Document and Information Design
Janice C. Redish, Redish and Associates, Inc., Bethesda, MD
DOI: 10.1002/047134608X.W5604
The sections in this article are: Exploring What Makes Documents Successful; Using a Process Model as a Job Aid for Successful Documents; Planning Successful Documents; Working Out the Logistics of a Document Project; Focusing on the Content That Users Need; Organizing to Help Users; Writing So Users Can Understand What They Find; Formatting (the Other Meaning of "Document Design"); Reviewing, Evaluating, and Revising Drafts.
ELECTRONIC DOCUMENT PRODUCTION
OVERVIEW

An electronic document, today more commonly called an online document, is one that is created, transmitted, received, and read with a computer or some other electronic encoding and decoding device. From the very beginning, the electronic document has owed its existence to technologies developed by electrical and electronics engineers. Moreover, engineering professionals, as well as business and general users, typically use electronic documents as one of their basic methods of professional communication.

The Origins of Electronic Documents
The earliest electronic documents were made possible by the telegraphs invented simultaneously in 1837 by Samuel F. B. Morse in the United States and by Charles Wheatstone and William F. Cooke in Great Britain. In its original form, the telegraph transmitted messages by electrical pulses over wire. Morse devised a code consisting of the letters of the alphabet, numerals, and basic punctuation marks that was commonly used to encode messages at the sending end and to decode them at the receiving end. Over the next hundred years, telegraphy was significantly improved to use wireless radiowave transmission. Eventually, Morse’s code system was replaced by teleprinting (teletype), which is still occasionally used, and by telegraphic facsimile reproduction, which was replaced in the 1980s by telephone facsimile transmission. Teleprinting, which is limited to text-only messages, uses a keyboard at the sending end to encode each character of the message into electronic impulses. The receiver decodes and prints the message. Teletype systems were most often used by news organizations to gather and disperse information. Beginning in the late 1950s, many general business customers took advantage of teleprinting systems such as the Telex, which employed special lines offered by telephone services. Since the 1970s, however, the news media have almost universally replaced teleprinting; first with satellite transmission and later with electronic mail, and business teleprinting systems have essentially been replaced by telephone facsimile and electronic mail. Although no longer used, telegraphic facsimile systems are notable because they had the capability of sending and receiving both text and images such as photographs or drawings. Telephone facsimile transmission, commonly called fax, has been increasingly used during the past 30 years to transmit text, photographs, and drawings. In its original form, fax uses a transmitting device to translate text and graphics on paper into electrical impulses sent via telephone lines to a receiving device that decodes the signals and prints a facsimile copy. Since the 1990s, it has been possible to transmit the contents of word processing, spreadsheet, and graphics files directly from one personal computer to another via telephone modems, which now typically include fax transmission and receiving capabilities, without the need for dedicated facsimile machines.
Although fax was originally used to transmit news photographs, it was later widely used in business and industry for a variety of applications:

• It is a quicker and usually less expensive alternative to postal or express delivery services.
• It is also an easy way to transmit documents written in Japanese, Chinese, and other nonalphabetic languages.
• It is an electronic document exchange technology that doesn't require computer access.
• It is of poorer quality than originals because of data loss inherent in the input and output devices when the input is paper rather than electronic files.

Contemporary Electronic Documents

The period since the early 1980s has seen significant advances in the creation and distribution of electronic documents over telegraphy, teleprinting, and facsimile transmission. Most of these are directly related to the computer technology that has now become ubiquitous. Computer-based electronic documents were not feasible until the keyboard and monitor began to replace the card punch and card reader as the most prevalent computer input and output devices in the late 1970s. These documents have become an increasingly common form of communication since personal computers started to become fixtures in the office and laboratory, and even on the shop floor, in the 1980s. In its simplest form, today's electronic document consists entirely of text and is communicated to a single reader. For example, an engineer might send an e-mail message to ask a technician to calibrate a new piece of equipment. The document would be created, transmitted, and read using e-mail software. But the technician could also choose to print the message to have a tangible reminder of the next day's priority task. This simple document has a corresponding paper counterpart and need not be created or viewed online. On the other hand, a very elaborate electronic document intended to teach secondary school physics students the basic principles of electrical currents may combine text, charts, drawings, photos, animation, video, voice narration, music, and sound effects. Such complex examples require more elaborate hardware and software for both authoring and viewing, and are usually intended for multiple readers (most often, readers who are personally unknown to the author). These more elaborate documents have no paper counterpart and must be created and viewed on-line. In addition to simple office applications such as e-mail and elaborate ones such as the multimedia lesson on electrical currents just described, electronic documents are frequently used in electrical and electronics engineering today to support products. For example, software that monitors electricity usage in the various parts of an office tower might feature online help; a mobile telephone with a built-in personal telephone directory might include prompts to assist users in entering names and telephone numbers; a high-definition television receiver might be effectively marketed with a multimedia brochure on the World Wide Web.
But as the tools for creating complex electronic documents become more powerful and easier to use, increasingly elaborate on-line documents are becoming more common not only in selling products and training users to operate them, but also for noncommercial uses such as reports and laboratory notebooks. The advantages of such capabilities are obvious. A report of test results that combines video clips of the test with the quantitative data collected during the test arranged in tables and plotted in graphs is likely to be easier to understand for many audiences than the same report without the video. Besides their ability to communicate sound, video, and other types of information that cannot be transmitted on paper, the most significant advantages of on-line documents over paper documents are their relative cost, increased accessibility, and greater ease of updating.
Not only can an electronic document avoid the cost of printing and paper, but it weighs significantly less and requires little or no physical storage space, significant considerations for the documents required to support equipment aboard a ship, aircraft, or space station, for example. However, the server storage requirements of online documents should not be ignored.

Information in an electronic document can be located far more quickly than in the paper equivalent by using electronic search tools rather than relying on the more limited search capabilities provided by paper indexes and tables of contents. Electronic document searches, though, can often return very large numbers of "hits."

Information in an electronic document can be linked to related information in the same document or in other documents by means of hyperlinks. These electronic links between parts of a document or between parts of multiple documents allow users to retrieve related bits of information more quickly and reliably than is possible using cross-references in paper documents.

Complete electronic documents or subsets of them can be automatically reformatted or adapted to alternate delivery media such as a computer or mobile device screen, fax, voice synthesizer, or printer if content management and single sourcing have been implemented for the product information.

When software is updated, a revised version of the documentation can be automatically installed on the user's computer or other device along with the new program files, avoiding the need for users to insert revision pages in loose-leaf manuals or to replace entire volumes on their shelves with the latest update. In addition, the Web, intranets, and portable media allow businesses to make revised versions of documents available quickly and easily and can help ensure that all users always have the most current version of a document available. Note that some international, national, and corporate standards require electronic delivery of all documents to reduce deforestation and waste paper.
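The single-sourcing idea mentioned above can be illustrated with a very small sketch; the topic structure, function names, and sample content below are invented for the example and do not represent any particular content management system.

# One topic, stored once, rendered to two delivery media.
topic = {
    "title": "Calibrating the flow sensor",
    "steps": ["Power down the unit.",
              "Connect the reference gauge.",
              "Run the calibration routine."],
}

def render_html(t):
    items = "".join(f"<li>{s}</li>" for s in t["steps"])
    return f"<h1>{t['title']}</h1><ol>{items}</ol>"

def render_plain_text(t):
    lines = [t["title"]] + [f"{i}. {s}" for i, s in enumerate(t["steps"], 1)]
    return "\n".join(lines)

print(render_html(topic))        # delivery to a browser or help viewer
print(render_plain_text(topic))  # delivery to a text-only channel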
But even though on-line documents offer many advantages, it is unlikely that they will ever entirely replace paper.
The very best computer monitors available today offer lower resolution, and therefore lower legibility, than typical laser and inkjet printers. Electronic documents are also usually less portable than paper documents and are sometimes not accessible at all: an on-line manual is not much help if the computer won't boot. Users cannot easily determine the size of some electronic documents, or the reading time they will require, upon opening them, because such documents lack the physical bulk associated with the amount of content of a paper document and do not include indicators of length. And some users are still intimidated by electronic documents just as they are intimidated by the computers that provide access to the documents. This problem is increasingly less common each year, however.

Approach of This Article

In the remainder of this article, the terms electronic document and on-line document will be used interchangeably, primarily to refer to more complex, elaborate documents, although most of the information presented here could be applied to even the simplest such documents. Also, because the computer hardware and software industry changes so rapidly that references to specific tools or technologies would be outdated before this encyclopedia is published, the primary focus of this article will be on principles for developing effective on-line documents rather than on the tools that might be used to build, deliver, or read them. The principles described in this article are derived from theory and research on effective oral and written communication developed over the past 2500 years and adapted to the media, audiences, and contexts of on-line documents. They also reflect methods and processes for managing engineering projects that have proven successful over many years. This combination of effective communication and effective project management techniques is the key to producing electronic documents that are useful to readers in an efficient and cost-effective manner.

CHARACTERISTICS OF ELECTRONIC DOCUMENTS

The term electronic document as used in this article refers to a communication artifact with the following characteristics:
• One or more authors
• One or more potential readers
• A message consisting of text, numbers, pictures, video, and/or sound
• Creation, transmission, and delivery via computer

This last component requires some amplification. Most people usually think of electronic documentation in terms
of computers, but the definition of computer should be sufficiently broad as to cover any product that uses microprocessor technology. Because much of the electronic equipment on the market these days incorporates such technology, many products created by electrical and electronics engineers can include electronic documentation, although it is not cost-effective or efficient to produce it for every product. In addition, the more elaborate types of electronic documents include other significant features.
• They incorporate hypertext or hypermedia, the ability to link one element of text, graphics, video, animation, or sound to another, whether within a single document or between multiple documents, possibly across multiple computers.
• They are easily searchable, enabling readers to quickly locate relevant portions of the document.
• Unlike many paper documents, they often have a nonlinear structure and provide multiple points of entry and methods of navigation.
• They sometimes provide readers with opportunities for a high degree of interaction with the text and other media by responding to dialogs, making annotations, asking questions, requesting information, and so forth.
All these characteristics make it possible to communicate much more effectively and to transmit many more types of information with electronic documents than is possible using paper documents. Electronic documents not only make it possible to provide readers with enhanced information, but they can also facilitate collaboration and learning, and they can support performance in ways that are impossible with paper documents. For example:

• Mechanics can learn how to maintain complex, expensive equipment from animation or video.
• Students can master foreign languages through computer-based training modules that include videos of dialogs between native speakers, vocabulary lessons that combine pronunciation clips with graphics illustrating the words, and unit mastery tests.
• Software help systems can guide users through seldom-performed tasks by prompting them for the needed information and almost completely automating the task.

DEVELOPING ELECTRONIC DOCUMENTS

In most respects, the process for developing on-line documents parallels that for developing print documents, or most engineering products, for that matter. There are four broad phases: analyzing, designing, constructing, and testing. To these four phases should be added a fifth: assembling a team with the expertise necessary to build high-quality multimedia electronic documents. This step is assumed in most engineering tasks, but it should be an explicit part of any on-line documentation project because of the variety of highly specialized skills needed to assemble such a document. A sixth phase that is also assumed for most engineering projects, evaluating the document after its publication, provides feedback that the documentation team can consider when planning a new release of the document.

Assembling the Team

Identify the Skills Needed. As with any other engineering task, the success of an electronic documentation project depends in large measure on assembling team members with the right mix of talents and previous experience to ensure that the project achieves its goals. No sensible engineering manager would consider undertaking the development of new fiber optic technology without the assistance of staff members who have the appropriate expertise to work on the project. Similarly, building a complex on-line document requires a broad range of skills.

Technical writers and editors with knowledge of the
proposed document’s subject matter should form the core of the team. They can contribute both the precision of expression needed in any technical document with the ability to write concisely and to structure electronic documents effectively. They can also ensure continuity of language, style, fonts, colors, and other elements of the document. A human factors specialist can help ensure that information is clearly presented to the target audience through the selected media by choosing the basic metaphor to be used in the design and presentation of information for the target audience. An information designer can determine how to chunk and tag information for content management database storage and reuse. A translation expert (either an in-house translator or someone who coordinates with outside translators) can anticipate the problems and requirements for translation of the document into other languages and localization of its content for other cultures. Artists, illustrators, and videographers can add a professional touch to the visual components of the document. An experienced on-line document indexer can help ensure that search strategies are properly designed to make information easier for users to locate. Musicians, audio engineers, and audio mixers may be necessary if the document will contain a significant amount of sound. Programmers may be needed to write or modify code for the project, even if the latest automated tools are used. Systems managers may be needed to support projects that will reside on networks or multi-user computer systems. Technicians with expertise in traditional and multimedia computer hardware can assist with configuring equipment.
An accessibility specialist can ensure that the document complies with government and industry accessibility requirements and guidelines. Usability specialists who are experienced in evaluating the usability of on-line documents may be helpful during the testing of the document to help the project team determine how effectively it meets the audience’s needs. Users of the planned document can provide essential input, especially during the analysis, design, and testing phases. An experienced project manager — preferably one who has worked on previous multimedia projects — will ensure that the project meets the customer’s and users’ needs and comes in on time and at or under budget. Even though the need for such specialists will be obvious to experienced multimedia project managers, it may not occur to those embarking on their first significant electronic documentation project. It is easy to be misled by the hype surrounding the tools available to create multimedia documents. Even a tool that is easy to use does not produce professional results unless the person using it has the requisite knowledge, experience, and skill. What tool could be easier to use than a paintbrush, for example? But there are relatively few who can produce a professional-looking portrait using that “simple” tool. The people in these roles need not be full-time or dedicated to the on-line document project. Occasionally, a single individual may possess skills in more than one of these areas, but the project manager should be wary of equating casual familiarity with expertise. Just because some team members have used a camcorder to make home videos does not mean that the project can do without the services of a professional videographer, for example. Although hiring specialists to produce electronic documents may seem needlessly expensive on first glance, experts work significantly more quickly and produce higherquality results than amateurs. This is especially important when the document either is part of a product or otherwise supports a product that is offered for sale, but it can also be a substantial factor when the document is intended solely for internal use within a company. Employees who use an on-line document for training are less likely to take an amateurish effort seriously and to benefit from the information it contains than they will one that results from the professional collaboration of experts. Deliverable: Documentation Project Team Organization Chart. Identifying the skill set needed for a project and recruiting staff with the necessary background usually spans the entire analysis and design phases and may extend into the construction and testing phases as well. But at the outset, the documentation project team manager should begin building the organization chart for the project. At the very beginning of the project, the documentation team manager should examine the list of skill areas and identify which will definitely or likely be needed on the current project. After these skills have been identified, the
manager can then start identifying or recruiting personnel for those positions. For example, most on-line documentation projects will require at least one technical writer, one editor, and one illustrator. The project manager should identify someone to fill each of these positions, and recruit them for the project team as soon as possible. For larger projects, these people will probably be the writing, editing, and illustrating leads, and will eventually be guiding the efforts of others who will assist with those functions. The project manager of an electronic documentation team must ensure that the team works together effectively. The manager should recognize that the team members have an unusual mix of skills and may not be accustomed to working together. Both the manager and other team members must understand the need to compromise their individual desires for the good of the team and project — for example, the video experts may not be able to produce “perfect” movie clips because of file size limitations imposed by the programmers. As the project proceeds through analysis, design, construction, and testing, the documentation manager monitors the tasks that must be performed and the staff required to do the work within the schedule and budget constraints for the project. Additional functions are added to the organization chart as needed, and personnel are identified to perform those functions. Specialty skills such as videography, audio mixing, and photo editing may require outsourcing, so it is essential to budget with these needs in mind. Analyzing the Purpose, Audience, and User Environment The first step in developing any new engineering product should be to answer three questions:
• Why is it needed and what is it intended to accomplish? In other words, what is its purpose?
• Who will use it to accomplish that purpose? In other words, who is its audience or user group?
• What setting will it be used in? In other words, what is the typical user environment?

The answers to these questions determine how it will be designed, constructed, and tested in subsequent phases of product development. Purpose and audience are inextricably linked, so it is impossible to analyze one in isolation from the other. The readers' needs are the central focus in defining the document's purpose, and the document's purpose must likewise be considered when performing the audience analysis.

Determine the Document's Purpose. All documents serve the same basic purpose, to convey information from the author to the reader. But what kind of information is conveyed and how the reader will use it differ significantly from one document to another. There are four major types of technical information.
Procedural information walks a user through the steps needed to use a product to accomplish a specific
task (e.g., the sequence of steps required to print a large engineering drawing using computer-aided design software). Conceptual information provides the background that users need to understand the context of a procedure (e.g., an explanation of why animation is a better choice than video to demonstrate how to insert a microprocessor chip). Reference information helps users make decisions and is meant to be consulted repeatedly, not to be learned (e.g., the list of part numbers for items that might be included in an inventory report or inquiry). Instructional information provides users with the data they need to perform tasks as well as the background they need to master those tasks and perform them routinely (e.g., a tutorial describing how to embed graphics files containing circuit diagrams in a technical report). Two or more of these types of information are often mixed in a single document because one document must frequently meet the needs of a range of users and situations. Such documents might contain procedural information to step some readers through a task for the first time as well as conceptual information to help more experienced readers understand why they should choose one procedure rather than another similar one. The next step in analyzing the document’s purpose is to define at a high level the tasks that the document should support. This is not intended to be a detailed list of all the functionality that the document needs to describe but rather a general list of function areas that the document will cover. For example, the on-line help for a simple text editor might need to address five high-level tasks: entering, editing, formatting, outputting, and saving text. Knowing the kinds of information that readers will need and the tasks that they will need to be able to perform will help the documentation team determine both the content and the organization of the document, as well as such elements as navigation and searching strategies. Without the information provided by a thorough analysis of the proposed document’s purpose, the success of the resulting document cannot be predicted. Analyze the Prospective Audience. It is essential to examine the audience in as much detail as possible before beginning work on the document. The team should collect at least three kinds of data about their prospective readers. Compositional Data. Is the target audience homogeneous or heterogeneous? In what significant ways are they alike or different? What are the relative proportions of each component population? Sociological Data. What are the prospective readers’ ages, education levels, reading abilities, comfort with English (or other target language), national origins, and other social characteristics? Do audience members have visual, auditory, motion, cognitive, or learning disabilities that must be accommodated?
Subject-Matter Data. What previous experiences do the readers have with the subject matter of the document? Where do they stand on the experience scale from novice to expert? What are their interest levels in and general attitudes toward the subject? If their prior experiences have been negative, is it possible to determine the reasons for those problems? Are they likely to be open to new experiences with the subject matter?

The answers to all these questions will help the project team better understand the backgrounds of its prospective readers and therefore communicate with them more effectively. A richly detailed description of the target audience resulting from this analysis will enable the project team to select the best words, writing style, depth of coverage of the subject, examples, tone, and supporting media to use in transmitting information to these readers.

Describe the Typical User Environment. Where the document will be read can sometimes have a dramatic impact on its design. The documentation project team will not always be able to provide a detailed description of the user environment, particularly when the potential user group is very large and diverse. But it is generally helpful to consider the following questions.
Where will the typical user read the document? In
an office or cubicle? On the shop floor? At home? In school? Outdoors? In a public or semipublic space? How much time do users have to view the document? Do they need to obtain information quickly to answer questions or solve problems, or do they have more extended time to devote to viewing the document? Is this time structured (programmed into the workday) or ad hoc? What are the typical lighting conditions in the user environment? If the user group is very large or diverse, what is the range of lighting conditions in which the document will be used? What sounds or noises are typically present in the user environment that might affect user concentration or ability to hear sound used in the document? What limitations does the environment impose on use of sound in the document? Does the user environment impose any special restrictions or limitations, such as the need to preserve a sterile field (as in an operating room) or to avoid potential contamination by radioactive, chemical, or biohazardous materials (as in a laboratory)? If the document will be read on the user’s computer, what are the typical users’ hardware and software platforms? What operating systems and releases? What processor chips and speeds? What monitor sizes and resolutions? How much random-access memory (RAM)? What hard disk sizes? Can they read portable media (CD-ROM, DVD, USB flash drive, memory card, etc.)? If so, what kind(s)? Do they have sound cards and speakers? Do they have advanced video cards? If so, how much video RAM do they have? How and at what speed do they connect to the Internet or to a corporate intranet?
What limitations or restrictions (security or other corporate policy) exist regarding changes to the user’s hardware and software? Will the project target the user’s existing computer infrastructure or can it require upgrades? How much infrastructure can be supplied with the electronic document in the form of fonts, audio or video software, document readers, and so forth? The answers to these questions will not only help the project team to understand the environment in which the document will be read but will also help them to make decisions later in the development process about supporting media to be incorporated into the document and about tools to be used for delivering, reading, and updating the document. Analysis Phase Deliverable: The Information Plan. Using the data about the document’s purpose and audience and about the users’ environment gathered during the analysis phase, the project team is ready to assemble a document information plan. The plan should include the following:
• Document purpose statement
• Audience description
• Information types the document should contain (list with brief explanations)
• High-level descriptions of user tasks the document should support
• User environment description
• Preliminary documentation project schedule
• Revised documentation project organization chart

The more elaborate the document envisioned, the more elaborate the information plan should be. The information plan for a simple on-line document might require little more than a page; very complicated documents will require much more complex information plans. The information plan should be reviewed by the entire documentation project team, and it should also be reviewed and approved by the product development team's management to ensure that it conforms to the analysis and schedule for the product-development project.

Designing the Document

After the team has determined the document's purpose and collected information about the intended audience and their environment, the design phase of electronic document development can begin. In design, the team will consider schedule, budget, and staffing constraints; choose the appropriate type of on-line document to build; define the topics it will contain; decide on the navigation sequence and search methods to be offered; identify the hypertext and hypermedia links to be included; identify the supporting media that will be used; and finally select one or more tools for developing the document.

Consider Budget, Scheduling, and Staffing Constraints. Although the project manager will be concerned with
budget, schedule, and composition of the project team from the very beginning of the electronic document project, these elements become crucial considerations in the design phase. The time, budget, and human resources available to the project are matters that must be weighed carefully before the document is built. Because the technology and capabilities of electronic documents can be quite seductive, it is important to balance what the project could deliver if resources were unlimited against the realities of time, money, and staff available to do the job. Many on-line engineering documents are either embedded in the products they support or must be delivered simultaneously with those products. The schedule for such documents must thus be incorporated into the overall product schedule. Specific operating instructions, for example, cannot be written before the product specifications that describe its functionality are approved. Final video clips depicting product assembly cannot be shot before a prototype of the product is built, but a mock-up clip can be supplied in a prototype or test document and replaced with the final clip in the last stages of document production. In short, the design and construction of the document must parallel — and usually lag a step or two behind — the design and construction of the product it supports. Careful coordination with the overall project schedule is vital at all stages. The documentation team and the rest of the product development team must work together closely to ensure that all aspects of the product’s development — including the documentation — are completed in a timely manner, and that minimal rework is required. Similarly, the budget for the document must fall within the allowable limits. An elaborate electronic document that includes several types of information and multiple supporting media will ordinarily require more staff with greater expertise, more elaborate equipment, and thus a larger budget than a less ambitious project. Although documentation teams strive to prove that they add value to the product rather than simply being part of the cost of bringing it to market, and although more elaborate documentation may make the product significantly easier to use, the documentation project manager must weigh the benefits of increased ease of use against the realities of the overall project cost. The project manager must also identify proposed use of expensive technology, such as video or custom photography, and approve its use only when it adds significant value to the document. Finally, the staff available to work on the electronic document must be realistically considered. A documentation manager must frequently allocate limited staff to multiple projects. If the human resources needed for an electronic documentation project are not available within the company, then the budget and schedule must allow time and money to recruit that talent, either on a contract or permanent basis, or outsource those tasks. The manager may also consider using clipmedia or reusing earlier work (previous versions of the same or similar documents) to reduce the need for specialized skills or larger staffs. At the design stage, the project manager must make a final determination of the skills needed, the personnel currently available, and the likelihood of acquiring additional resources with the
necessary experience and skill if budget and schedule permit. Select the Appropriate Document Type. Using the information gathered about the electronic document’s audience and purpose, the first step in the design phase is to choose the type of document that will best serve the audience’s needs and accomplish the project’s purpose. The following document types include those most commonly produced when this article was written. On-Line Books. On-line books (including such diverse types as encyclopedias, manuals, and reference guides) typically provide reference information about a product or other subject. Users consult this type of document to obtain further information about a topic and to understand the context in which certain product features would be used. On-line books are the closest equivalent to paper documents, and paper reference manuals can often be successfully and easily translated to electronic form. Brochures, Kiosks, Demos, and Guided Tours. These document types are primarily marketing tools that typically describe one or more of a company’s products or services. They may be self-running, in the fashion of a slide show, or they may require reader interaction. These types of electronic documents are often found in stores and trade shows, but they are also sometimes distributed through the mail on portable media or published on the Web. Some documents of this type are informational, rather than promotional. Informational brochures, kiosks, demos, and guided tours are also found in museums, historic sites, and other public places to explain or interpret displays or simply to assist the user in locating a desired destination. Occasionally, these document types are also used as stand-alone instructional tools. Readme Files. Readme files typically supplement paper documentation by providing information about a product that was not available when the paper documents were prepared or that has changed since the paper documents were printed. Although many readme files are plain text with minimal formatting, Web browsers, operating systemspecific help engines, and “digital paper” products that reproduce the look and feel of paper documents make possible more complexly formatted readme files incorporating hypertext and multimedia. Messages. Messages are embedded within a product and inform the user about the status of a task or about problems encountered in completing a task. They are usually very brief, but the best messages are as complete and selfcontained as possible. For complex conditions when complete information about resolving the problem cannot be provided because of space limitations, the message should offer advice on where to find additional information in paper documentation or allow the user to summon the help facility for further detail on resolving the problem. Help. On-line help is intended to assist users in operating a product (usually a software package) and is almost
always embedded within that product. Help is typically consulted when a user experiences an operational problem and is uncertain about what to do next. The help facility’s ability to anticipate the task the user is seeking assistance with is called context sensitivity. The degree of context sensitivity may vary depending on the capabilities of the tool used to author or deliver the help, on the capabilities of the product being documented, or on implementation choices made by the documentation team. For example, some help facilities are designed so that they take the reader to the topic corresponding to the field or other area on the screen where the cursor is located when help is requested, whereas others will take the reader to a menu where a topic can be chosen. Computer-Based Training. Computer-based training (CBT) typically includes one or more lessons or modules combining instruction, drills, reviews, and exit testing to certify the user’s competence on a subject. Because most implementations allow users to progress at their own pace and learn on their own schedules, CBT is an excellent substitute for “stand up” instruction on many topics when a large number of people must be trained. The results of module pretests may allow users to bypass instruction on topics they have already mastered. Because it is now possible to deliver computer-based training directly to the desktops of users with multimedia-ready workstations, CBT is a much less expensive option than it once was. Computer-Based Reference. A computer-based reference tool allows readers to look up seldom-used or widely varying information. For example, such a document might display the correct postal code for a user-specified address or list the manufacturer’s contact information for a userspecified part number. Computer-based references are often interfaces to databases that might not be considered “documents” in the usual sense of the word. When the audience for the electronic documentation project is diverse and the project has multiple goals, it is usually advisable to create two or more types of document. For example, many software documentation projects include readme files, on-line help, and a computer-based reference. The time, staff, and budget available for the project will determine the feasibility of including various document types that might be called for based on the analysis of purpose and audience performed in the project’s analysis phase. Define the Topics. Topics are the building blocks of online documents, just as chapters and sections are the building blocks of paper documents. In designing an electronic document, the project team must identify all the topic components of the document, ensure that the topics are sized for optimal effectiveness, and take care that the topics approach the subject from the reader’s perspective. The first step in defining topics for the on-line document is to review the list of high-level tasks in the information plan. The documentation team analyzes these tasks and develops a detailed list of user tasks that the document will support and then classifies each of them according to the type of information the user will need to perform the task.
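One way to make the task-to-topic mapping concrete is to record, for each detailed user task, the information type it calls for. The Python sketch below is purely illustrative; the task list, names, and structure are invented for the example and are not taken from the article.

# Each detailed user task is classified by the information type it needs.
INFORMATION_TYPES = {"procedural", "conceptual", "reference", "instructional"}

tasks = [
    {"task": "Print a large engineering drawing", "info_type": "procedural"},
    {"task": "Understand when to use animation instead of video", "info_type": "conceptual"},
    {"task": "Look up part numbers for an inventory report", "info_type": "reference"},
    {"task": "Learn to embed circuit diagrams in a report", "info_type": "instructional"},
]

# Sanity-check the classification and group tasks by information type.
by_type = {}
for t in tasks:
    assert t["info_type"] in INFORMATION_TYPES, f"unknown type: {t['info_type']}"
    by_type.setdefault(t["info_type"], []).append(t["task"])

for info_type, task_list in sorted(by_type.items()):
    print(info_type, "->", task_list)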
The team then develops a list of topics that the document will provide to support users in performing each item in this detailed, classified list of tasks. The documentation team should resist the urge to pack too much information into a single topic. Each topic should be an easily understood chunk of usable information, not a mass of data. Each topic should answer a single question, such as What are the steps in this process? or When is this process performed? In sizing topics, the team should remember that the attention span for readers of on-line documents is 10 to 15 min at a sitting compared to 30 to 60 min for readers of paper documents. In terms of physical appearance, depending on the tool used to read the document, a topic can be defined as a single screen, panel, or page of an on-line document. Ideally, especially when the document is being designed primarily for online reading, the reader will be able to see the entire topic on screen at one time without having to scroll the display. Sometimes, however, the amount of information comprising the topic makes minimal scrolling unavoidable. Designing topics requires the juggling of two apparently contradictory goals. To present topics without requiring readers to scroll, the documentation team must carefully define each topic in such a way that the completed topic will contain only essential information or in a way that a larger topic can be broken down into two or more smaller topics. At the same time, the team must ensure that the topic’s design contains all the information the reader will need to understand it. In terms of content, a topic will correspond to one of the four information types discussed earlier: procedural, conceptual, reference, and instructional. It is important not to mix information types in a single topic. If one topic provides readers with the procedure for performing a task, it should not also contain a conceptual explanation of why that procedure is used in preference to another similar one. The reason for limiting a topic to a single information type is that different readers have different needs for different types of information. Because users of on-line documents (even more than users of print documents) tend to skim or browse rather than truly read, isolating information types that address similar subjects in unique topics helps ensure that users will find the information that they need quickly and efficiently. Although the term page is sometimes used to refer to a topic, especially for documents presented over the Web, an electronic page does not usually correspond to a page in a paper document. There are exceptions to this rule — as in electronic equivalents of paper catalogs, for example, that present each of a company’s products on a printed page. Most often, however, an electronic document topic should contain no more than one-third of the information on a typical printed page. When designing topics for an electronic document supporting a product, the project team must ensure that the topics are user-centered, not product-centered. Consider the on-line help for word-processing software, for example. From the documentation team’s perspective, it might appear that the most useful way to define topics would be to address the various menu selections a user can choose. This is a product-centered approach to topic design. The
user-centered approach addresses the tasks that a user can perform with the product. Instead of a topic on the Tools Envelopes and Labels menu selection, the documentation team should define a topic on how to address envelopes. Although this approach might seem to apply only to the title of the topic, it goes much deeper. A product-centered document focuses on the product’s functionality. A usercentered document, on the other hand, focuses on how the product helps users perform common tasks or solve common problems. Although it may be effective to market products by listing and briefly describing their capabilities, users don’t typically learn functions. Instead, they learn how to solve problems or perform tasks they need to do using the product. Decide on Navigation Sequence. Unlike many paper documents, electronic documents are not usually organized in a linear fashion. Readers of nonlinear on-line documents do not start at the beginning and read through the document until they reach the end. Instead, they access the document at the point that introduces the information they need, and they read only the relevant portions of the document. There are four basic types of navigation or browsing sequences for on-line documents: linear, grid, hierarchy, and web, as shown in Fig. 1. In a linear sequence, the user is intended to navigate through the topics from the beginning to the end, much as in the typical printed document. The information is organized from beginning to end (chronologically, as in a narrative), from general to specific (logically), or in some artificially ordered way (alphabetically by topic title or numerically by some uniquely assigned identifier). The linear sequence is best used to meet a defined outcome, such as stepping through a multipart procedure with a fixed order. A grid sequence resembles the linear but adds complication. In addition to navigating from left to right in the grid, readers can also choose to move up and down. This greater flexibility allows grid navigation schemes to serve more diverse user groups. For example, a grid browsing sequence can allow users who are reading a procedural topic to move to a related conceptual or instructional topic quite easily if they wish to obtain a different slant on the material. Grids are also ideal for presenting similar categories of data about multiple subjects. Thus, the grid sequence allows users to explore a topic in more or less detail, depending on their prior knowledge or experience. A hierarchy sequence resembles an organization chart. It usually consists of a single topic at the top of the hierarchy, which branches to multiple topics at the next level, each of which may branch further at subsequent levels. The result resembles the menu structure of many software applications, and this scheme can be an easy way to provide reference information about a product. This structure is most effective when the information is highly organized. The hierarchy sequence allows users to drill down to locate the precise information they need to answer questions or perform tasks, but it sometimes requires a greater degree of knowledge of or experience with the subject area than does the grid sequence. Finally, the web sequence is the most complex, resembling a spider’s web. Here, the user may move from one
topic in the web toward related information at any connected node. The web organization is the least structured and is effective for presenting large amounts of information that is only tangentially related. The web sequence allows the most opportunities for user exploration, but it also provides more chances for users to get lost.

Figure 1. The topics in an electronic document can be structured in four ways: linear, grid, hierarchy, and web. Each major part of a large, complex electronic document can have its own organization.

The simpler the organizational structure, the easier it is for users to predict what information they will find when they navigate to the next topic; the more complex the organizational structure, the easier it will be for users to get "lost in hyperspace" and become distracted from the job at hand. But the predictability and ease of use of the simpler structures also incur a handicap. The simpler the navigational structure, the less easy it is for readers to understand the connections between related topics. For large, complex on-line documents that are intended for a diverse audience of users, it is possible to combine several of these structures. One major part of the document might consist of a linear or hierarchy structure designed to meet the needs of beginners who need procedural and conceptual information, whereas another takes the form of a grid to accommodate the requirements of more advanced users who need reference information.

Determine Information Access Methods. After the documentation team has defined the document's organizational structure, it must then address how readers will find the information, that is, how they will navigate to the first topic in a browsing sequence. Context-sensitive on-line help provides one access method for that type of document. Document tables of contents, indexes, and searching capabilities provide other ways for readers to find the information they need.

Context-Sensitive Help. Most software applications on the market today allow users to summon help in performing tasks by pressing a key or clicking an icon. In response, the software displays a help topic that addresses the screen the user was working on and sometimes the field where the cursor was located when help was requested. This capability is possible by linking the software code to the help file by means of "context strings." The documentation team must work closely with the product developers to ensure that context strings are used consistently in the programs and in the help.
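A minimal sketch of how context strings can tie product screens to help topics follows; the identifiers and structure are invented for illustration, and real help engines and authoring tools each have their own conventions.

# Map context strings emitted by the product to help topic identifiers.
CONTEXT_MAP = {
    "dlg_print_setup": "printing-a-drawing",
    "fld_part_number": "looking-up-part-numbers",
}

HELP_TOPICS = {
    "printing-a-drawing": "How to print a large engineering drawing...",
    "looking-up-part-numbers": "Finding the part number for an inventory item...",
    "help-contents": "Table of contents for the help system.",
}

def show_help(context_string):
    """Open the topic for the current screen, or fall back to the contents page."""
    topic_id = CONTEXT_MAP.get(context_string, "help-contents")
    return HELP_TOPICS[topic_id]

print(show_help("fld_part_number"))   # topic tied to the field under the cursor
print(show_help("unknown_screen"))    # falls back to the help contents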
Document Tables of Contents and Indexes. Good electronic documents have tables of contents and indexes that allow users to locate topics. Just as in a printed book, an electronic table of contents appears at the beginning of the document and reflects the overall organization of the document. The electronic index, much like a paper index, is an alphabetical list of concepts presented in the document. Hyperlinks in both the table of contents and index allow readers to move directly to the topics they want. As with paper document indexes, it is important that an electronic index allow readers to access information not only using the terms found in the document itself but also using synonym terms. Search Capabilities. The electronic index gives the user the ability to search the document for the keywords chosen by the indexers, but keyword searches often do not allow readers to find information they are looking for. Depending on the tools chosen to author and deliver the document, other types of search capabilities may be available, including full-text and Boolean searching. In some cases, such as Web documents, search capabilities native to the reading tools can be extended with scripts or external programs that allow additional searching capabilities. When electronic documents consist of more than a single file, the documentation team needs to consider the problems that can result if the reading software’s builtin search mechanisms allow users to search only one file at a time. In such cases, the information readers are seeking may be available in other parts of the document, but readers may not be aware of that fact. In such cases, it is advisable to determine a way to extend the search function to span the entire document or provide readers with information about how to look elsewhere if they do not find what they are attempting to locate.
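Extending search across the files that make up a document can be as simple as the following sketch; the directory name, file extensions, and search approach are illustrative only, and a production tool would use a prebuilt index rather than scanning every file on each query.

import os

def search_document(root_dir, phrase):
    """Case-insensitive full-text search across every topic file in a document."""
    phrase = phrase.lower()
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            if not name.endswith((".htm", ".html", ".txt")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                if phrase in f.read().lower():
                    hits.append(path)
    return hits

# Example: find every topic file mentioning "calibration" anywhere in the manual.
# print(search_document("online_manual", "calibration"))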
Identify Links. The links in a document not only implement its navigation or browsing sequence (navigation or browsing links) but also connect points within one topic to points in the same or other topics (hyperlinks). Because navigation links have already been addressed, this section considers hyperlinks. Not only text and graphics, but also sound, video, and animation can be hyperlinked. The term hypertext refers to the linking of text only, whereas hypermedia refers to the linking of any electronic document medium to any other. Hyperlinks should add value to the electronic document by helping users find significant additional information, such as the following:
Definitions of unfamiliar terms
Photos or drawings of equipment
Video or animation of a process
Confirmation or cautioning sound effects
Music to illustrate or reinforce a point
Voice narration
References to related information
Because of the time required to create, test, and verify hyperlinks, as well as the need to minimize user distraction, the documentation team should avoid providing more than two or three links within a topic. Links should enhance the information contained in the topic, not serve as necessary side trips that readers must take to obtain essential information about the topic. Additionally, the documentation team should develop standards for implementing hyperlinks within a document. Although most computer users today recognize that a blue- or green-shaded word or phrase with a dotted underline, or a button, indicates a hyperlink, they may not recognize graphics or icons that serve as links, or links indicated with nonstandard colors or other cues. The cues should be carefully established and consistently implemented throughout an electronic document, and they should be made obvious to users. Links from one document that take readers to another document require special care in implementation and maintenance. The documentation team should take care that all the documents needed for links to work are correctly installed so that links do not fail when users choose them. Furthermore, if the electronic document is implemented on the Web or a corporate intranet and contains links to external documents, such as other sites on the Web that are outside the documentation team's control, the links should be monitored to ensure that they do not become stale. The decision to use relative or absolute paths to specify the locations of linked files is complex and requires careful execution. Relative paths are generally the safer alternative, especially because the absolute locations can vary significantly from prototype to test to production versions of the document.

Determine Supporting Media. In most cases, sound, video, and animation support the message communicated by text and graphics but do not bear the brunt of the communication burden. That is, the sound effects, music, narration, dialog, animated sequences, and video that are
incorporated into electronic documents are generally not essential and can be eliminated without significant effect on the amount or significance of the information being conveyed to the user. There are obviously exceptions to this rule. The sound clips accompanying a training module on the significance of various safety alarms communicate much more effectively and clearly than verbal descriptions of those alarms. Sometimes a sound effect, a picture, or a video is worth far more than 10,000 words. When these media play a defining rather than supporting role, however, it is essential to provide alternate media for those with visual or hearing disabilities, or those whose environments may not permit them to play the media. Media other than text are also relatively expensive to produce, require more elaborate user hardware, and can require significant storage space. For example, high-quality video requires not only the installation of plug-ins for playback and the services of a professional camera and sound crew, lighting, and editing, but also, typically, "talent" — people who act out scripts that have been prepared in advance. Add to these costs the overhead incurred by storage space requirements on the user's hard disk, the competition for space on a CD or Web site, the network bandwidth required to download the file from a network server, or the contention for processor time on the user's workstation, and the costs become even more significant. Animation, sound, photography, and illustrations are usually significantly less expensive than video, but a document that will incorporate large quantities of these media can still prove costly. Some of these overhead costs can be reduced. For example, the storage and bandwidth required for full-motion video (30 frames per second) can be significantly lowered by sampling the video at a rate of 15, 10, or even fewer frames per second. Even though the difference is perceptible, it is generally acceptable to most users in some situations (e.g., where video portrays "talking heads" — individuals speaking while seated or standing at a podium and making relatively few movements). When the documentation team decides to incorporate essential or highly important multimedia data in a document, it is vital to alert readers in a conspicuous way in every topic where those essential media appear. This practice will ensure that readers whose workstations do not support those media, who have disabilities that prevent them from seeing or hearing those media, or whose environments do not permit playing those media will not miss important information. In these cases, it is advisable to provide a text and graphical summary of that information, either directly in the topic or by means of a link to a supplementary topic. Finally, if a document will incorporate multimedia, the documentation project team must be aware of the copyright requirements that protect the creators of these media just as they protect creators of books and articles. The documentation team should avoid reproducing anything in an on-line document — text, graphics, photography, animation, video, music, or sound effects — that is not clearly in the public domain or for which they have not paid a royalty.
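The savings from a lower frame rate scale roughly linearly with the rate, as a back-of-the-envelope calculation shows. The figures below are hypothetical (uncompressed frames at one fixed resolution, ignoring codecs and audio), so only the proportions, not the absolute sizes, should be taken from this sketch.

```python
# Back-of-the-envelope sketch; the clip length, resolution, and bytes-per-frame
# figures are hypothetical and ignore codec compression, which changes the
# absolute numbers but not the proportional savings from lower frame rates.
frame_bytes = 320 * 240 * 3          # one uncompressed 320x240 RGB frame
clip_seconds = 60

for fps in (30, 15, 10):
    size_mb = frame_bytes * fps * clip_seconds / 1_000_000
    print(f"{fps:2d} fps: about {size_mb:,.0f} MB per minute of raw video")
# Halving the frame rate halves the raw storage and bandwidth; 10 fps needs one-third.
```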
Select the Tools to Be Used. Except in rare situations, the project team should not make any assumptions about the tools that will be used to create, distribute, and view an online document until the very end of the design phase. The reason is a basic one: a tool cannot be selected wisely without first performing the required analysis and design steps, and identifying
The document's purpose and audience
The kinds of information needed
The users' environment
The user tasks that the document needs to support
The type of document to be created
The topics to be included
The navigation sequence and search methodologies to be employed
The number and types of hyperlinks to be used
The supporting media to be used
The project's budget and schedule constraints

If tools are selected prematurely, the choice may not deliver the needed information effectively, have the needed functionality, or be usable by the intended audience. For example, to assume that a document will be created with sophisticated multimedia authoring software is not realistic if the project manager is unable to recruit staff members who are sufficiently familiar with the tool to meet the schedule. Similarly, to take for granted that a document will be delivered as a help file supported by one operating system is foolish if the audience has a significant number of users whose workstations run a different operating system. The best way to proceed, then, is to wait until the very end of the design phase to select the tools and follow these guidelines:
Select authoring tools that best conform to the document that has been defined and can be competently handled by the staff available to perform the work. Avoid the temptation to select a tool that offers capabilities that users don't need or that require staff talents that aren't available.
Choose a viewing tool that is appropriate to the user base. Avoid the temptation to select a tool that users are unfamiliar with or that will require extensive training for them to use effectively. Similarly, be certain that the tool selected is compatible with the users' existing hardware and system software (or planned upgrades).
Pick a delivery technique that is appropriate to the type of document, the product it supports, and the needs and capabilities of the intended users. Avoid placing a burden on the user by making delivery as transparent to the user as possible; embed the document in the product it supports or otherwise install it automatically.
Prototype a small segment of the document in the selected tool. The prototype will help you identify the tool's limits (e.g., supported levels of hierarchical links
and multithreading of topics and files, font portability between operating systems, and so forth).
Be aware of the cost of licensing run-time versions of nonstandard viewing tools for each user of the document.

Above all else, be conservative in choosing authoring and viewing tools as well as the delivery method. This approach will not only save the project money and time but will also usually result in a more effective document from the user's perspective.
Many on-line documents need to be accessible to readers who use multiple hardware and software platforms. The easiest way to ensure cross-platform compatibility for the commonly used operating systems is to rely on Web browsers as the user's reading tool. Because browsers for each of these platforms can interpret the same files on Internet servers, intranet servers, local area networks, and the individual user's machine, they provide a great deal of flexibility. Because virtually all users today already have Web browsers installed on their desktops and because browsers are easily available for download for those who don't, browsers are often the tool of choice. Nevertheless, the electronic documentation team needs to consider cross-browser compatibility issues and the various browsers' support for media, and it must monitor the screen resolutions and dial-up versus broadband connectivity prevalent among the target audience. Other tools in common use as this article went to press include the following:
Operating system online help engines — The continuing evolution of the Windows and Macintosh operating systems, however, means that a single help file may not be compatible with all users' workstations, depending on the versions of the operating system they run.
"Digital paper" tools — Software such as Adobe Acrobat reproduces the look and feel of existing paper documents and adds hypermedia and on-line indexing and searching capabilities.
Design Phase Deliverable: Document Design Specifications. The document design specifications describe in detail the content and organization of the entire electronic document. As with the information plan prepared at the end of the analysis phase, the more elaborate the document envisioned, the more elaborate the design specifications need to be. Two major components are suggested. Story Board. It is helpful to construct a story board consisting of one panel for each topic the document will contain. Each story board panel should include
A unique alphanumeric topic identifier for this topic
The topic title that is grammatically parallel with the titles of other similar topics (For large projects, it is helpful to develop naming conventions for the titles of various types of topics.)
A descriptive outline of this topic's contents, including cross-references to pages in the product's design specifications that describe the functionality to be addressed in this topic
The file name (including full pathname) for each text, graphic, sound, animation, and video source file that will be built for this topic (For large projects, it is helpful to develop naming conventions for the filenames of various types of files.) Warning: Links cannot be maintained if files are moved or their names are changed.
The search terms to be used in building any index that will help users locate information contained in this topic
For help topics, the product code routines and parameters that call this topic when help is invoked
The unique identifiers of all topics from which this topic can be accessed, both through the planned browsing sequence and through embedded links
The unique identifiers of all other topics that can be accessed from this topic, both through the planned browsing sequence and through embedded links
The name(s) of the person(s) assigned to build each text, graphics, sound, animation, and video source file for this topic
The name(s) of the person(s) responsible for fact-checking, editing, and testing each source file for this topic
The deadline dates by which each source file must be built, fact-checked, edited, and tested for this topic
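One convenient way to keep story board panels trackable is to record each panel as a small structured record. The field names and sample values in the sketch below are hypothetical; they merely illustrate the kinds of information listed above, not a prescribed format.

```python
# Illustrative sketch only: field names and values are hypothetical, but they
# mirror the kinds of information a story board panel records for one topic.
from dataclasses import dataclass, field

@dataclass
class StoryBoardPanel:
    topic_id: str                     # unique alphanumeric topic identifier
    title: str                        # grammatically parallel topic title
    outline: str                      # descriptive outline of the contents
    source_files: list[str] = field(default_factory=list)   # full pathnames
    index_terms: list[str] = field(default_factory=list)
    linked_from: list[str] = field(default_factory=list)    # topic identifiers
    links_to: list[str] = field(default_factory=list)
    author: str = ""
    editor: str = ""
    deadline: str = ""                # ISO date, e.g. "1999-06-30"

panel = StoryBoardPanel(
    topic_id="PRN-010",
    title="Setting Up a Printer",
    outline="Procedure for adding a local printer; cross-ref design spec p. 42",
    source_files=["topics/prn-010.htm", "art/prn-010-dialog.gif"],
    index_terms=["printer", "print setup"],
    links_to=["PRN-020"],
)
```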
Figure 2 shows a sample story board panel for an electronic document topic. Document Diagram. In addition to the story board, which provides a detailed view of each topic of the proposed document, it is helpful to construct an overall document diagram that shows each topic and its position in the intended browsing sequence. If space allows, additional lines can be drawn between topics to depict embedded links. This diagram gives the documentation project team a bird’s-eye view of the entire document. Topic Templates. To ensure consistency and minimize rework, it is helpful to prepare a template that indicates the arrangement and formatting of information for each type of topic. Depending on the types of information the document will contain, the design specifications may include a procedural topic template, a conceptual topic template, a reference topic template, an instructional topic template, and a home or menu topic template. Revised Documentation Project Schedule and Organization Chart. The project team needs to revisit the schedule and organization chart for the documentation project at the end of the design phase and make any revisions needed. The information on the project schedule should be as detailed as that on the story board to allow the teams to update the status of each task as the various document components are constructed.
As the project continues through the construction phase, the story board, diagram, templates, project schedule, and organization chart need to be maintained to keep them up to date and useful to members of the project team — and especially to assist the documentation project manager in tracking the project to ensure that the team has the necessary resources to deliver the project on time and at or under budget. As with the deliverables at the end of the analysis phase, the design specifications should be reviewed by all members of the documentation project team; they should also be reviewed and approved by the product development team manager to ensure that the documentation effort is in line with the overall product development effort. Building the Document If the electronic document has been designed according to the method advocated in this article, the project team will find that building the document is an easy task because the specifications contain all the information necessary for the writers, illustrators, animators, sound engineers, and videographers to produce the necessary parts and for the document production staff to assemble those parts. Creating and Assembling the Parts. Using the document design specifications, the documentation project team members create the components for each topic; assemble and format the components of each topic; create the index entries for each topic; create the links between topics; and then assemble and compile the entire document.
Each story board panel provides the documentation project team with the data needed to write the text, draw the illustrations and animations, record the sounds, and produce the video required for that topic. When each of these files is created, it is named and stored in accordance with the file names and locations specified on the story board. As each file for the topic is created, its status is updated on the documentation project schedule. The story board panels also contain the information required to create both browsing sequence links and embedded links from one topic to another. As the links for each topic are completed, their status is updated on the documentation project schedule. Note that linking is a bottom-up process. The file or topic to be linked to must exist before the link will work. Good design can provide the filename, but the file must be created before the link can be tested. The story board also contains the information needed to index and create search terms for each topic in the online document. As the search terms and index terms for each topic are created, their status is updated on the documentation project schedule. The document topic templates provide the document production staff with the information needed to format and assemble each topic as specified by the story board panel. As the assembly and formatting of each topic are completed, their status is updated on the documentation project schedule.
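Because linking is bottom-up, an automated pass over the draft can list every link whose target file has not been built yet, so the team knows which links cannot be tested. The sketch below assumes HTML topic files with href attributes and uses deliberately naive parsing; the directory name is hypothetical, and a production authoring or help-compilation tool would normally perform this check itself.

```python
# Minimal sketch, assuming HTML topics with href="..." attributes; external
# (http) links are skipped here, and the parsing is deliberately naive.
import re
from pathlib import Path

HREF = re.compile(r'href="([^"]+)"')

def broken_relative_links(root: str) -> list[tuple[str, str]]:
    """Return (topic file, target) pairs whose relative link target does not exist yet."""
    missing = []
    for topic in Path(root).rglob("*.htm"):
        for raw in HREF.findall(topic.read_text(encoding="utf-8", errors="ignore")):
            target = raw.split("#")[0]  # drop any in-page fragment
            if not target or target.startswith(("http://", "https://", "mailto:")):
                continue  # external links need a separate staleness check
            if not (topic.parent / target).exists():
                missing.append((str(topic), target))
    return missing

for source, target in broken_relative_links("help_topics"):
    print(f"{source}: link target not built yet -> {target}")
```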
Figure 2. This sample topic story board, a part of the document design specifications, provides all the information the documentation team needs to create the topic.
The document diagram provides the documentation team with guidance on the shape of the entire document and the links between its major parts. All the design phase deliverables provide the project manager with the information needed to track the progress of the project. By monitoring the status of each task on the documentation project schedule, the project manager can determine which parts of the document are proceeding according to or ahead of schedule and which are behind. The project manager can thus determine the cause of any problems and assign
additional staff as needed to ensure that the construction of the entire document is completed on time. When all the components of the document are essentially complete, the document may require compiling before it can be viewed with the reading tool chosen for the electronic document. Complex electronic documents that must be compiled often require multiple iterations to compile successfully. The documentation project team should allow sufficient time in the schedule to produce a clean compile before turning the draft document over to testing.
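The tracking itself can be very simple. As an illustration only, the status values, due dates, and task records below are hypothetical; in practice the schedule would usually live in a spreadsheet or project-management tool, but the same tallying logic applies for spotting tasks that are behind schedule.

```python
# Hypothetical schedule records; in practice these might live in a spreadsheet
# or project-management tool rather than in code.
from collections import Counter
from datetime import date

tasks = [
    {"topic": "PRN-010", "task": "write text",   "due": date(1999, 6, 30), "status": "done"},
    {"topic": "PRN-010", "task": "edit",         "due": date(1999, 7, 7),  "status": "in progress"},
    {"topic": "PRN-020", "task": "record sound", "due": date(1999, 6, 25), "status": "not started"},
]

today = date(1999, 7, 1)
print(Counter(t["status"] for t in tasks))     # overall progress at a glance
late = [t for t in tasks if t["status"] != "done" and t["due"] < today]
for t in late:
    print(f"Behind schedule: {t['topic']} / {t['task']} (due {t['due']})")
```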
Construction Phase Deliverable: The Draft Document. After the document has been substantially completed and compiled if necessary, it is ready for testing. Depending on the size and complexity of the project, there may be more than one version of the draft document, but this description will assume a single draft. It will be reviewed by the entire document project team as well as by the testing team in the next phase. The product development team management may review and approve the draft for testing separately at this point, or that review may be delayed until the end of the testing phase. As at the end of the analysis and design phases, the documentation project manager should review and update the project schedule and organization chart at the completion of the construction phase. Testing the Document During the testing phase, members of the documentation project team as well as product developers check the entire draft document carefully and thoroughly to ensure that it is complete, correct, fully operational, and otherwise meets the organization’s quality standards. Members of the documentation project team may — and indeed should — test individual topics and topic clusters before the draft is complete, but that unit testing is not a substitute for comprehensive system testing of the entire document. A test plan that briefly describes exactly what tests and checks are to be performed on each topic should be prepared. The test plan should also provide a test results form on which the tester records the completion of each test for a topic (along with tester’s initials and date), as well as any problems noted. What Testing Is Performed and Who Performs It. Depending on the document’s complexity, the project schedule and budget, and the personnel available to conduct the testing, some draft document tests may be more or less elaborate than others. At a minimum, the test plan for each topic of an electronic document should ensure that:
The text uses correct spelling, punctuation, grammar, syntax, and word usage (tested by the technical editor)
The text is legible and readable by the target audience (tested by the technical editor and usability specialist)
The vocabulary is understandable to the target audience (tested by the technical editor and usability specialist)
The facts contained in the topic are correct (tested by a product development team member or other subject matter expert)
Illustrations, animations, video, and sounds are correct and accurately rendered, and the system returns to normal operation (screen colors and cursor focus) after the multimedia event is completed (tested by a product development team member or other subject matter expert)
Navigational and cross-reference links work correctly (tested by the technical editor)
For help, the context strings in the program code call the correct topic (tested by the technical editor and programmer)
The text, colors, illustrations, photos, animations, videos, and sounds are correctly rendered on the various user hardware and software platforms (tested by the media specialists and hardware/software specialists)

In addition, procedural topics can be tested by the quality assurance personnel who test product functionality, thus ensuring that the steps outlined in the document produce the specified results. Testing should be conducted on machines that closely resemble those used by the target audience because developers' machines bias the results: they usually have all software and the contexts for any relative links preinstalled, conditions that will not exist on users' machines.

How Test Results Are Reported and Resolved. Draft document testers record their completion of each test of a topic on the test results form for that topic by initialing and dating the form adjacent to the test description. They also record any problems noted on the form. When a problem is reported, the person responsible for the topic resolves it, and the revised topic is retested. This cycle continues until all problems have been addressed. Warning: Fixing one problem may cause another problem. The testing cycle must include systematic testing of areas adjacent to modifications, and sufficient iterations to guarantee that all problems are identified and repaired. During the testing phase, the documentation project manager carefully monitors the progress of the tests, notes the problems that are reported, and ensures that resources are available to assist in resolving them. Because thorough testing is crucial to ensuring a quality electronic document that will be both usable by and accepted by readers, the manager must resist the temptation to hurry testing or approve the document for publication before all problems have been resolved. This is particularly important because other product development deadlines will frequently have already been met before document testing is completed.

Testing Phase Deliverable: Final Draft of the Document. When all testing and revisions to the document are complete, the documentation project team reviews the final draft of the document. It is also reviewed and approved by the management of the product development team. After the final draft is approved, the document is published.

Publishing the Document

After the electronic document has been built and tested, it is published, but publication does not mean the same thing for all electronic documents, and electronic publishing is quite different from paper publishing. To publish a paper document, the originator prints it and then makes it available by distributing it to the potential audience. In the case of an on-line document, however,
the document is not printed — at least not by the originator — and distribution can take many different forms. The method of distribution must be addressed during the design phase, but the actual distribution must be monitored to ensure that the document reaches the prospective readers. Much of the best software currently on the market demonstrates the best way to distribute electronic documents: by embedding them within the products in such a way that users cannot distinguish product documentation from the product itself. Readme files displayed automatically during product installation, help files, messages, and even features of the product's interface such as prompts for user input are all delivered transparently to the user. That is, the user does not have to think about or choose to install these documents; they are automatically installed during product installation. Electronic documents that reside on the product's installation media (such as a DVD or CD-ROM disc) after the product has been installed, or that must be retrieved from the Internet, may not always be available when users need them. When the electronic document is published, the development team must ensure that all source files and compiled libraries are archived so they are available the next time the product or its documentation is revised.

USING ELECTRONIC DOCUMENTS

No matter how convenient or well designed, no matter how great their potential for supporting user learning or performance, electronic documents are useless unless people actually read them and find them helpful. In the past, paper documentation has been neglected by users who are far more interested in using a product than in learning about how wonderful it is. Most users tend to learn how to use products through trial and error rather than by reading the documentation, so paper documents have often been relegated to the status of shelf decoration. Even in the nuclear industry and other situations where users are required to follow a written procedure step by step, and initial and date/time-stamp each step as it is completed, there are numerous instances of employees who have not actually used the procedure in performing the work. Is the record any better for electronic documents? Unfortunately, the answer is probably not. Despite the fact that virtually all software applications now offer electronic documentation, the user who experiences a problem or who needs to perform a task for the first time is more likely to ask a co-worker or call the manufacturer's technical support line than to summon the on-line help. As a result, experienced creators of on-line documentation are looking at alternative ways of approaching electronic documents.
They are applying the principles of minimalist documentation to on-line as well as printed documentation. Minimalism basically endorses supplying users with the minimal amount of documentation they need to get up and running with a product and encouraging users to explore and learn on their own.
They are usability testing on-line documents and applying the results to the next major release of the document.
They are replacing at least some of their existing on-line procedural documentation with wizards that walk users through seldom-used processes by prompting them for the necessary parameters and essentially automating the processes. Even though wizards offer less control over the fine details of performing the process, many users prefer them because the wizards allow them to get real work done quickly rather than having to learn product features.
They are looking at more creative, just-in-time delivery methods to get information to users. Nontraditional resources such as wikis, blogs, and message forums may be more appealing to or appropriate for some users than traditional electronic documents.

REVISING ELECTRONIC DOCUMENTS

One of the major advantages of electronic documents is that they can be more easily revised, and the revisions more easily distributed to readers, than is the case with printed documents. Because the on-line document is not printed except by some users, the document's originator has no investment in an inventory of printed copies. This fact alone frees electronic document originators in some cases to update their documents continually and make the revised versions available to their readers as they are approved for distribution. Similarly, the cost of distributing revised documents is essentially nonexistent. Revised versions can be posted on Web or file transfer protocol (FTP) sites for easy, free downloading via readers' Internet connections. Corporate intranets provide the same capability. New releases of on-line documentation for new releases of the hardware and software products they support can be distributed with the revised product, either through the post, over the Internet, or through commercial outlets. The cost of shipping the new documentation on portable media is slight, and in most cases, the electronic documents can be included on the same media as the application or system software for the product at no additional cost. As a result of the low associated cost, document creators can issue more frequent releases of the electronic documents than would be practical with paper documentation, and users can be assured of having the most recent version of the document. In cases where up-to-date documentation is critical to safety or security, the electronic document can be made available only via the Internet or corporate intranet.
BIBLIOGRAPHY
Reading List T. T. Barker, Writing Software Documentation: A Task-Oriented Approach, 2nd ed., New York: Longman, 2002. C. M. Barnum, Usability Testing and Research, New York: Longman, 2002. C. M. Barnum, E. Henderson, A. Hood, and R. Jordan, Index versus full-text search: A usability study of user preference and performance, Tech. Commun., 51 (2): 185–206, 2004. J. M. Carroll ed., Minimalism Beyond the Nurnberg Funnel: Technical Communication, Multimedia, and Information Systems, Cambridge, MA: MIT Press, 1998. D. K. Farkas and J. B. Farkas, Principles of Web Design, New York: Longman, 2002. J. T. Hackos, Information Development: Managing Your Documentation Projects, Portfolio, and People, 2nd ed., New York: Wiley, 2007. J. T. Hackos and D. M. Stevens, Standards for Online Communication: Publishing Information for the Internet/World Wide Web/ Help Systems/Corporate Internets, New York: Wiley, 1997. J. T. Hackos and J. C. Redish, User and Task Analysis for Interface Design, New York: John Wiley & Sons, 1998. P. S. Helyar and G. M. Doudnikoff, Walking the labyrinth of multimedia law, Tech. Commun., 50 (4): 497–504, 2003 (originally published 1994). ISO/IEC CD 26514. Software and Systems Engineering — User Documentation Requirements for Documentation Designers and Developers, (forthcoming in 2008). P. J. Lynch and S. Horton, Web Style Guide: Basic Design Principles for Creating Web Sites, 2nd ed., New Haven, CT: Yale University Press, 2002. Available http://www.webstyleguide.com/. Microsoft Corporation. Microsoft Manual of Style for Technical Publications, 3rd ed., Redmond, WA: Microsoft Press, 2004. A. Rockley. Managing Enterprise Content: A Unified Content Strategy, Indianapolis: New Riders Press, 2003. Section 508 of the US Rehabilitation Act, 1998. Available http://www.section508.gov/.
GEORGE F. HAYHOE George Hayhoe Associates, Aiken, SC
ENGINEERING NOTEBOOKS
ENGINEERING NOTEBOOKS If engineering work is to have lasting value, it must be recorded. Records are especially important in work such as testing and design, and professional notebook-keeping is absolutely essential for documenting research and development work. Engineering is man’s endeavor to manage and improve his activities and environment and, as with all human endeavors, it involves both successes and failures; however, engineers are people trained to learn and progress from experience. ‘‘While engineers can learn from . . . mistakes what not to do, they do not necessarily learn from successes how to do anything but repeat the success without change.’’ (Henry Petroski, To Engineer is Human, Random House, New York, 1992). Encountering the limitations of tools, materials, methods, principles, and ideas often teaches more than success. The engineering notebook records both successes and failures. However, not all engineers keep notebooks. Much of the work that falls to engineers is directed, that is, the engineering that is to be conducted is prescribed in detail by the engineer’s supervisor or client, or predetermined by the very nature of the task; engineers doing such work seldom keep notebooks. It is the responsibility of the person directing the work to keep the essential record. Engineers accustomed to doing directed work must recognize the occasions when unexpected occurrences require that they take initiatives based on their individual engineering skills, and immediately begin professional record-keeping.
PROFESSIONAL NOTEBOOK-KEEPING

Professional engineering practice requires that the performance of engineering work and the results be recorded first in engineering notebooks in the handwriting of the engineer or persons responsible to the engineer. When information in separate forms—such as drawings, circuit diagrams, blueprints, graphs, forms used to collect data, photographs, computer printouts, and procedures—is essential to the original record, these are usually incorporated by reference but are sometimes affixed to a notebook page. Although all information about a project might be assembled in a collection of such documents, those compilations cannot serve all the purposes of an original notebook record. The log of an engineering project can also be maintained in electronic (computer) files (see Appendix 1); however, this presents difficulties when the record is needed (to be discussed here) as corroborative evidence in legal proceedings.
THE USES OF ENGINEERING NOTEBOOKS

First, entering the record of an engineering project in a notebook is a tool for the engineer; second, this original record is the most authoritative source for reviewing the engineering work after it is completed. The discipline required to make a complete, concise record as the work progresses leads the engineer to make and/or focus on a plan of the work, and as the work progresses, compare it to the plan. The need to set things in writing causes the writer to organize thoughts and interpret observations that might otherwise be overlooked and/or forgotten. Notebook-keeping thus can yield moments of insight that might not otherwise occur. As the record builds, the engineer can review the work completed in the order in which it was conducted and reexamine it for indications of whether or not it was conducted to the engineer's satisfaction. Because engineers must be responsible for their individual work, questions about it can be expected to arise in their own minds as well as the minds of others; as a result, the engineer should expect to have to prepare reports. The engineer therefore needs the notebook to be able to recall every essential part of the work. The information in engineering notebooks is needed by the engineer's coworkers, management, or client/customer, any of whom may be working on the same or a related project or may need to repeat the work. Patent professionals need original records for such purposes as understanding an invention, establishing the priority of an invention, and determining who the inventor or inventors are. For these reasons, records of the origin of information must be made, along with the contributions of individuals and the dates of their contributions. Most important, notebooks are often taken into courts of law as evidence for the truth of any assertions the engineer or the engineer's employer or client may make regarding such matters as failure of designs or devices, injury to persons, or damage to the environment. In the United States, engineering notebooks may be used as evidence in proceedings within the US Patent and Trademark Office; for example, in interference proceedings. Interference proceedings are conducted to determine who was first to make an invention when two or more competing inventors or companies file patent applications with equal or overlapping claims. Although most engineering notebooks are never needed for examination in legal proceedings, they are nevertheless valued for offering recourse to the original engineering. They are therefore ordinarily preserved by the owners indefinitely against such possibilities as loss, destruction, or access by unauthorized persons. The professional engineer respects notebook-keeping and approaches it with the resolution to be thorough, completely honest, and forthcoming, and as objective as possible, and records credit to others where credit is due. Furthermore, the engineer should record the work and the results of the work without speculation or comment on the related work of others in the field; any statement that can possibly be given an adverse interpretation in subsequent legal proceedings will be given such interpretation by opposing counsel. Evidence of the engineer's concern for these standards or lack thereof may be looked for along with the engineering information. Although the engineer cannot know when the notebook will be examined, where, by whom, or for what purpose, the aim must be to keep a notebook that will stand any test for completeness, honesty, engineering ability, and professionalism. (The test for professionalism is whether the work is for the good of society.)
WHAT INFORMATION IS COLLECTED IN AN ENGINEERING NOTEBOOK? The notebook comprises a concise, complete, and authentic record of the engineering on a project as the work progresses that afterward can be read and understood. It includes, in the order in which the engineer deals with them, such topics as a statement of a problem, premises, procedures applied, essential observations and experience acquired, discussions of significant matters, names of persons who contributed significantly and indications of what their contributions were, and results and conclusions. The record will be concise if nothing is entered that is trivial or that can be incorporated instead by reference. The record will be complete if essential resources to the work and individuals who contributed importantly can be identified, dates on which the work took place can be verified, and a reader could repeat the work exactly as it was conducted.
WHO PRESCRIBES THE RULES FOR ENTRIES INTO THE ENGINEERING NOTEBOOK?

Because engineers are professionals, they could be expected to fulfill the purposes of notebook-keeping without having rules prescribed for them; however, the information recorded by the engineer (and the notebook itself) is ordinarily the property of the engineer's employer or client. That party has overriding concerns, among the most important of which is the need for support in proving or disproving legal matters. The engineer who is self-employed may have no one with whom to share responsibility for the work, and therefore must be especially attentive to notebook-keeping because of possible legal consequences. What goes into a notebook is therefore partly prescribed by certain rules that will make the notebook useful and beyond reproach in a court of law, and on balance prescribed by guidelines for good engineering. The prescriptions of the law professional are mainly based on concerns that the notebook not be susceptible to even a suggestion that the record is not authentic, ethical, and honest in every way. For example:
• It must not appear that anything could have been altered dishonestly after its original date of entry.
• All entries must clearly have been made in chronological order.
• Corrections to original entries must be made in such a way as to plainly show that the correction was an honest one and when the correction was made.
• The entries on each page must be signed by the writer who entered them, and on the date they were entered.
• The record must show that it was regularly read and understood by a person or persons who affixed their signatures and the date they read and signed.
RULES AND GUIDELINES FOR NOTEBOOK-KEEPING Notebooks are frequently used by an engineer to record his or her individual work. Also, two or more engineers working on the same project may keep notebooks for their contributions to the same project, in which case cross-references must be entered between the different engineers’ records. In addition, rather than a notebook being associated with a particular engineer, a notebook is frequently designated to be associated with a particular project or part of a project, or piece of operating or test equipment, in which case all assigned personnel make entries in the same notebook. In all cases, however, it is necessary that the record allow clear distinction and identification of each individual’s entries. Guidelines for notebook entries are discussed in the following paragraphs. Use the Proper Book. Start with a book that is bound with a strong stitched binding and printed with page numbers. (Any unbound notebook or adhesive-bound book that may come apart in long storage may be susceptible to the suggestion that pages could have been inserted or removed from the original record.) The convention is that the notebook pages are prepared for writing on one side only; the right sides are numbered and are the only place for official entries. For convenience, a notebook size is preferred that is suitable for machine photocopying of entries. Ruled paper should be used for the discipline it promotes, and quadruling is often preferred as it lends itself to entering information in the form of tables, graphs, and diagrams directly on the page. Preferably, the official pages are preprinted with, in addition to page numbers, a form at the top of each page for identification of the work in progress (title, project number, and so forth), a form at the bottom of the page that provides a place for the writer to sign and date his signature, and words such as ‘‘read and understood by,’’ ‘‘witnessed by,’’ or ‘‘witnessed and understood’’ (depending on the preferences of the engineer’s legal advisor) with a form for this person or persons to affix their signatures and the dates of their signing. Make All Entries Permanent. Use indelible ink for all entries in an engineering notebook. (Any other writing may not be preserved and, in any case, could not easily be proved not to have been altered.) An ink that lends itself to machine copying is preferred. Record the Basis for Beginning the Notebook. Identify the notebook with a serial number. (Ordinarily the notebooks used have a serial number already affixed by the printer at the designation of the owner.) On the first page enter the name of the engineering project or machine the notebook is to be associated with, the person or persons responsible for keeping the notebook, and the serial numbers of any related notebooks (such as a prior one the record is to be continued from, or notebooks kept by other personnel doing work on other parts of the same project). Designate and label pages to be used for a table of contents (TOC). Enter the topics and subtopics of notebook entries on the TOC in the order in which they are recorded and label each with the date of making the TOC entry. Make Changes and Corrections Understandable. Whenever it is found immediately after making an entry that it was not made as intended, or was made with a mistake, cross out the entry with one thin line (so the original remains plain to any
reader at a future time), follow the entry with an entry of the preferred notation, and initial and date the alteration. Label Regular Entries. Begin each page with an identification of the work (for example, by the title of a project, subproject, or entry, or perhaps a project number), and preferably the engineer’s own sequential number for a unit of work, such as Evaluation No. 23, Test Design IA, Experiment 46, Run IIIB, etc. Make Cross-References. Connect the record to that of prior or simultaneous related work. Write a concise statement for the background of the work, such as why the engineering is needed or who asked for it. If the record to be entered is to be a continuation of the record for a project other than that of the immediate preceding entry, include ‘‘continued from page __’’ to connect the entry to the page on which the last entry was made before the work was interrupted. List the Objectives. Write the clearest possible statement of the objective(s) of the unit of work, and enter any revision of objectives in the future at the point in the record where the revisions are made. Record a Plan for the Work. If a series of units of work is to be conducted, it will be useful to enter a plan. For a single unit of work, it will be helpful to enter the procedure that is desired to follow. (The notebook is the best place for the engineer to have such records to review, and helps direct the work of technicians and other engineers who may be participating in the work.) Record Procedures, Observations, and Significant Discussion. Insofar as possible, all implementations and observations made during the course of the work should be recorded at the work site and at the time they are made, or as close to that place and time as possible. Obviously, entries must be concise as well as complete. Descriptions of procedures and designs should not be entered when they can be found elsewhere. Often these require only a reference, such as to an entry on a previous page. The notebook should not be cluttered by entering anything that is generally assumed, widely known, or conventional; however, when such knowledge needs to be identified, a notation should be included that will lead any reader to a reference for the source. Data should be entered as collected so that any calculations and interpretation subsequently applied can be reviewed. Space is never allowed between entries for entering information at a later date. (That information can be entered when it becomes available with a reference to the earlier, related entry.) All unusual or unique aspects of the work should be noted and commented on as concisely as possible. The names of persons who contributed significantly to the work should be entered, with their individual contributions indicated. Close Off Each Entry. When a writer considers an entry complete for the moment, or another writer must make an entry, the first writer must draw a line across the page and sign and date the entry. Incorporate Printed Documents. Because entries must be complete yet concise, it is often useful to affix a clipping to the official page, such as part of an instrument chart, a computer printout, or an original sketch. In fact, such entries often provide the most direct, authentic presentation of what needs to be recorded. Such materials should be attached to the page with permanent glue and should be signed and
dated by the originator. With some insertions, such as photographs, the insertion should be attached in a way that allows access to any identifying information that may be on the reverse side. Law professionals also advise that documents that are part of the engineering record and not affixed to the notebook should always be signed and dated by the originator, labeled with a cross-reference to the notebook or other document which is uniquely identified, and in most instances, signed and dated by witness. For additional information, see Ref. 1. Use Unofficial Pages for Nonessential Entries. Because most engineering notebooks are prepared for official entries on only the right-hand page, the reverse sides or left-hand pages are ordinarily used by the engineer for notes and calculations that are entered there for convenience (‘‘scratch pad’’) rather than for the record. It is in fact desirable to enter calculations on the unofficial page facing the official record; should some error be found in a numerical record, it will then be possible to check the original calculation of that record. Moreover, printed materials are often affixed to the left-hand page when it is helpful to have them there for reference but not essential to have them there for understanding the official record. Summarize Results and Conclusions. Refer to or summarize such matters as the significant results calculated from raw data; enter a summary description of what was designed, built, or accomplished relative to the objectives of the work; enter the implications or significance of the work. Include Essential References. Identify references either in context (parenthetically) or collected under a separate heading. These are needed only when information or principles are applied that were innovative or at least come from sources not likely to be known by or readily accessible to a reader. Complete Every Page. When it is decided that no more entries will be made on a page (such as at the completion of a work unit), the person completing the page should proscribe the use of any remaining space by drawing a diagonal line across the space and labeling the diagonal with his or her initials and the date. However the page is completed, the person doing so must immediately sign and date the bottom of the page and, at the earliest convenience of all parties, have one or two persons who are not working on the project (witnesses) sign and date the page. The dates of their signing are important to prove the existence of the record at the time the witnesses read it. All pages must be witnessed in this manner; it cannot be known what matter may come to be focused on in a court of law, or when, or for what reason. Choose Appropriate Witnesses. Legal advisors differ as to whether the persons asked to affix their signatures on the pages of the engineer’s entries should be certifying that they have read and understood the entries, or should be certifying something else, such as that the page was complete and authentic when they signed it. In any case, witnesses should be selected who are disinterested; that is, they have no concern with personal gain or loss from the recorded work. These persons therefore need not have observed the work being carried out; however, persons chosen should be near enough to the work and work site to be aware of the engineer’s work, so that they could, if necessary, answer questions relative to the authenticity of the entries on the page.
SUMMARY Professional engineering practice requires that the engineer or persons responsible to the engineer keep a concise, complete and authentic handwritten record in a bound notebook from the beginning to the end of a project. The record must be made in a manner such that another engineer could repeat the work solely on the notebook record. Furthermore, because the engineer’s employer or client—or the engineer in his or her own right—may need the notebook to support or disprove assertions made in legal proceedings, the notebook must be kept without any gaps or irregularities or hints of such that could interfere with its use as corroborative evidence. What goes into a notebook is therefore partly prescribed by certain rules that will make the notebook useful and beyond reproach in a court of law, and on balance prescribed by guidelines for good engineering. Rules and guidelines for notebook-keeping are ordinarily prescribed by employers and their legal advisors, and the recommendations offered herein were prepared with the same considerations. APPENDIX 1. ELECTRONIC NOTEBOOKS In this age, when more and more information is being gathered by computerized devices, processed by computers, and stored by computers, engineers are abandoning pen and paper in favor of writing on computers. In fact, the log of an engineering project can often be more easily maintained on a computer. However, this presents difficulties when the record is needed to corroborate evidence in legal proceedings. As most files kept on computers can be easily modified without leaving any trace of the modification, much more extensive testimony is required to establish and defend how the record was created and maintained. For this reason it is desirable to keep a sufficient contemporaneous written record to corroborate the electronically stored data. Recent literature (1) has described several approaches to increasing the evidentiary value of a set of electronic files. However, until such procedures are widely proven and recognized, engineers will have to weigh the ease of making an electronic record that can be made reliable for legal proceedings only with great difficulty, with the inconvenience of keeping a handwritten notebook that will be an authentic, reliable record. BIBLIOGRAPHY 1. P. D. Kelly, Keeping Research Records, in Successful Patents and Patenting for Engineers and Scientists, Michael A. Lechter (ed.), Piscataway, NJ: IEEE Press, 1995.
ELLIOTT V. NAGLE Registered Patent Agent
file:///N|/000000/0WILEY%20ENCYCLOPEDIA%20OF%20ELECTRICAL%...0ENGINEERING/49.%20Professional%20Communications/W5611.htm
}{{}}
●
HOME ●
ABOUT US ●
CONTACT US ●
HELP
Home / Engineering / Electrical and Electronics Engineering
Wiley Encyclopedia of Electrical and Electronics Engineering Information Search and Retrieval Standard Article Michael Culbertson1 1Libraries Colorado State University, Fort Collins, CO Copyright © 2007 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W5611.pub2 Article Online Posting Date: June 15, 2007 Abstract | Full Text: HTML PDF (64K)
Abstract: The communication of information is essential for any discipline. Scholars and practitioners cannot work in isolation; a free flow of ideas is vitally important to the practice of a profession or discipline. Electrical engineering and electronics are no exception. This article will concentrate on how information is transmitted through the formal media in electrical engineering, and allied subjects, and how such information is retrieved. Both print and electronic media will be considered. The sections in this article are: Introduction; Forms of Communication in Electrical Engineering; Bibliographic Tools; Document Retrieval; Summary. Keywords: information; communication; journals; publications; patents; peer review; proceedings; standards.
INFORMATION SEARCH AND RETRIEVAL
INTRODUCTION Since humans first began to use tools, there has been a need to convey technological information. The earliest transmission of such information was by word of mouth. For thousands of years, instructions in the making and use of tools were handed down from generation to generation within families or small communal groups. The invention of writing, in about 3400 BC, expanded the audience for technological information to those who could understand written symbols (1). Indeed, much writing in the ancient world concerned the transmission of technical information. The invention of paper by the Chinese further expedited communication. In the Middle Ages, scholars dispatched personal letters to announce new discoveries. Echoes of this practice can still be found in publications such as Electronics Letters, which is dedicated to the rapid communication of research results. The invention of the printing press in 1453 created the potential for distribution of information to a mass audience. The partial realization of this potential was a factor in the scientific revolution two centuries later. Hooke, Newton, Leibnitz, and their colleagues published books and broadsheets describing their theories and experimental results. These publications stimulated thought among the small but ever-growing body of the scientifically and technologically curious. This outpouring of scientific activity in the late seventeenth century led to the establishment of the Royal Society, the first institution dedicated to the encouragement of scientific research. At first, the Society was a discussion group, a sounding board for new ideas. Gradually, it developed more formal means for the communication of new scientific knowledge. By the early nineteenth century, when Michael Faraday began the serious study of electricity and its properties, the Royal Society was publishing the first modern scientific journals. The defining features of these publications were both a dedication to communication of new scientific knowledge and a rigorous critical review of such communication. Within 50 years, early professional societies in engineering had adopted the model of the scientific, or scholarly, journal. Other media have also had prominence in the communication of knowledge within electrical engineering. Patents were probably the primary source of communication among electrical engineers and inventors in the nineteenth and early twentieth centuries. Edison, Tesla, Westinghouse, and many of their engineering contemporaries announced the results of their work almost exclusively by means of patents. With the advent of computer technology, the dissemination of information has undergone another revolution. The defining characteristics of this revolution have been increases in storage capacity, processing power, and speed. Engineers can now transmit masses of data with the touch of a few keystrokes. These developments promise to produce new, and heretofore unknown, forms of scientific and technological communication.
FORMS OF COMMUNICATION IN ELECTRICAL ENGINEERING
Scholarly Journals
The medium of the scholarly journal is still paramount as a means of communication among scientists and engineers. The discipline of electrical and electronics engineering is no exception. A common feature of most scholarly journals is that they are peer-reviewed. Peer reviewing is a process that has evolved to lend assurance of scientific accuracy. Articles submitted to these journals are sent to reputable, impartial practitioners in the discipline. These reviewers then check the articles submitted to them for accuracy of research methods, relevance to the discipline, and plausibility of results. Normally, this process is one of blind review, meaning that the identities of authors and reviewers remain unknown. Scholarly journals are often published by established professional societies in each discipline. In electrical engineering, two such societies, the Institute of Electrical and Electronics Engineers (IEEE) and its British counterpart, the Institution of Electrical Engineers (IEE), publish the most prominent English language journals. The IEEE is particularly well known for its Transactions series of publications, which are peer-reviewed journals devoted to specific aspects of electrical engineering. The IEE publishes a number of scholarly journals as well. Chief among these journals are: IEE Proceedings, IEE Review, and Electronics Letters. The American Institute of Physics also issues journals, which are of interest to electrical engineers, including: Journal of Applied Physics, Physics of the Solid State, Semiconductors, and Technical Physics. In addition, various commercial publishers, such as John Wiley & Sons, Springer Verlag, Elsevier Scientific, and McGrawHill, publish scholarly journals in electrical engineering. Some scholarly journals are now available through a process known as open access. In this model costs of distribution are kept to a minimum by mounting articles directly on the Internet. Costs of production are typically handled through page charges to authors rather than subscription fees. For more information about open access see the Scholarly Publishing & Academic Resources Coalition (SPARC) Web site at: http://www.arl.org/sparc/. Other publications concentrate on the news or current events in electrical and electronics engineering. A good example of this type of publication is the IEEE Spectrum. Although these titles normally do not publish research results, they would include reports on trends in research. Conference Proceedings Research results and other scholarly information are also regularly disseminated through professional and scholarly conferences. This process normally works in two ways. First, papers are presented orally to the conference attendees. Then, these papers are published in the proceedings of the conference. Many conferences are sponsored and organized by professional societies and are often held annually or at other regular intervals. Conferences are also organized by universities, institutes, government agencies, and international organizations. Conferences provide opportunities for electrical and electronics engineers to inter-
act with colleagues from other parts of the world. Indeed, they sometimes provide unique occasions for communicating the ideas and views of engineers who, because of language, lack of communication infrastructure, political climate, or other reasons, may be isolated from professionals and scholars with similar interests. Copies of the written record, or proceedings, of conferences are usually provided to attendees as part of the registration package. In addition, proceedings are normally published by the sponsor of the conference or by a commercial publisher. Published proceedings are offered in limited runs and are usually purchased exclusively by libraries. Research findings and other information presented at conferences can be subject to peer review. Often, however, more preliminary findings are presented at conferences, preparatory to later publication in scholarly journals. Standards Standards are a particularly important form of communication in electrical engineering. Standardization in engineering can be described as: “The process of establishing by common agreement engineering criteria, terms, principles, practices, materials, items, processes, equipment parts, and components (2).” In the United States, the most prominent standardization agency for electrical engineering is the IEEE. Committees composed of members develop standards within the IEEE. These groups normally compose a draft standard, which is then submitted to the IEEE Standards Board for final approval. IEEE standards are now also available online. The Underwriters Laboratories (UL) issues standards for electrical products and appliances. Other societies issuing important standards in electrical engineering include: Comite Consultatif International Telegraphique et Telephonique (CCITT) and the International Electrotechnical Commission (IEC). General engineering standards, which often affect electrical and electronics engineering, are issued in the United States by the American National Standards Institute (ANSI) and internationally by the International Organization for Standardization (ISO). Patents Patents are an essential part of the information infrastructure in electrical engineering. The primary purpose of patents is to protect the rights of inventors by granting them temporary monopolies over their inventions. Patents are issued by national governments, which means that differences can exist from country to country in terms of format, procedure, and scope of protection. Patents also serve as a rich source of information in engineering. In applying for a patent, inventors are required to state all they know of the invention. In most cases, this information is not available elsewhere. In fact, it has been shown that up to 90% of patent documents are never reported in the journal literature or in other media (3). Patents are usually available to interested parties through government patent offices and, in some cases, in selected libraries. The United States, along with most other patent-granting countries, issues a patent gazette that provides an extensive abstract of each patent and, often, drawings. A drawback is that these
publications are usually arranged by an arbitrary numbering system. To complete a proper search of this literature requires a patent index, most of which are available in electronic form. The United States Patent Office offers online searching via the World Wide Web at: http://www.uspto.gov. The Patent Office also provides full-text of patents. To view the images in patents, however, a Tagged Image File Format (TIFF) plug-in must be installed, which can be done by following the links on the Patent Office search page. Patent databases are available commercially as well. Two of the best known are the Derwent Patent Citation Index and the Derwent World Patents Index. Government Information Government reports and regulations are a valuable and necessary source of information in electrical engineering. Various government entities carry out basic and applied research. Often, this research is only reported in government publications. Laws and regulations govern all types of electrical engineering practice. Good examples are those dealing with the disposal of hazardous waste. Government agencies and commissions are often instrumental, through their reports and other publications, in setting national policies, which affect the practice of engineering. In the United States, this data, and other information from the national government, is provided to the public through a system of depository libraries. United States government documents are increasingly available on the Internet as well. These documents can often be found by entering the website of the agency that issued them (i.e., Department of Energy, Environmental Protection Agency). Another option is to access the Government Printing Office website at: http://www.gpo.gov. Other governments, particularly in developed countries, have somewhat similar programs, usually coordinated through national libraries. Governments also provide funding for engineering research carried out in universities, institutes, and private engineering firms. Reports based on this research are, in many cases, subsequently published by government agencies or by private firms with exclusive rights to its publication. In the United States, the National Technical Information Service (NTIS) is a private firm that acts as a clearinghouse for this type of information. The NTIS maintains a database to provide access to these reports and sells them to interested parties. Not all government reports are available to engineers and other members of the public. Most governments have established classification schemes for restricting access to sensitive documents. Often, but not always, these documents relate to national defense. Gray Literature The report literature, or gray literature, is a valuable, yet elusive, source of information for electrical engineers. The gray literature refers to internal reports, memoranda, unpublished manuscripts, notes, and other forms of information that are rarely indexed or rarely find their way into library collections. Clues to the existence of such information can sometimes be found in the citations of journal articles or in specialized bibliographies. Obtaining these reports, however, can be difficult. Requests to one or more of the
authors can be the most efficient way to obtain this material. Requests to corporate archives and documentation centers can also be successful. In some cases, this kind of information can be located on the Internet. Complicating the situation is the fact that these reports are often proprietary and, thus, not available to the public. BIBLIOGRAPHIC TOOLS Large amounts of information, however valuable, are not much use without the tools designed to locate and use it efficiently. In electrical engineering and electronics, a number of such tools exist. These tools usually take the form of databases and indexes that provide access through a number of different search parameters, such as subject, author, and title, to the literature of disciplines, subdisciplines, and topic areas. Indexing sources can assume a variety of different formats and can appear in various media. Paramount among these media are print, CD-ROM, and online. Common to all indexing sources is the concept of the record. A record, in indexing terms, is a unit of information that describes a specific published entity (although some indexing records describe unpublished manuscripts as well). Such entities can assume a number of different forms: articles, conference proceeding papers, books, government reports, Internet publications, and others. All records provide certain basic pieces of information about published entities, such as author(s), title, subject or keyword, and, if describing an article, the source periodical where the article can be found. Indexing sources that provide access to the literature in scholarly journals and conference proceedings often provide abstracts as well. Abstracts provide a description of a scholarly paper. Normally, this description is a paragraph in length, but it can be more extensive. Electronic Indexes Increasingly, bibliographic databases in electronic media have become the most heavily used tools for accessing the literature of electrical engineering. The reasons for this development are many; databases provide almost instantaneous access to search results across a number of years, whereas print indexes rely on laborious search processes, which cover one year or, at best, a short span of years. A further advantage of databases is that, with the advent of online access through the Internet, searches can be performed from offices and homes, maximizing efficient use of libraries. To provide access to their records, most databases use a form of boolean logic in which sets of record addresses are established that conform to search parameters provided by the user. The usual pattern is that a set, containing “x” number of records, is established, which is then modified by further sets. For example, a search of the term “semiconductors” might be modified by “gallium arsenide.” The three main boolean connectors used in bibliographic searching are: “and,” “or,” and “not.” “And” is used to form sets in which each record contains all subject terms entered via a specific search statement. “Or” forms sets in which alternate terms are included, such as “Very Large Scale Integration” or “VLSI.” The purpose of the “not” connector is to exclude records that mention the indicated subject terms.
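The set logic behind these connectors can be illustrated with a short sketch. The following Python fragment is an illustration only; the records, subject terms, and function names are invented for this example and do not reflect the query syntax or internal design of any particular bibliographic database.

records = {
    1: {"title": "GaAs MESFET amplifiers",
        "subjects": {"semiconductors", "gallium arsenide", "amplifiers"}},
    2: {"title": "Silicon CMOS scaling",
        "subjects": {"semiconductors", "silicon", "vlsi"}},
    3: {"title": "Optical fiber dispersion measurement",
        "subjects": {"optical fibers", "dispersion"}},
}

def hits(term):
    # Set of record identifiers whose subject field contains the term.
    return {rid for rid, rec in records.items() if term in rec["subjects"]}

# "and": both terms must be present (semiconductors AND gallium arsenide) -> {1}
narrow = hits("semiconductors") & hits("gallium arsenide")
# "or": either term is acceptable (vlsi OR gallium arsenide) -> {1, 2}
broad = hits("vlsi") | hits("gallium arsenide")
# "not": exclude records that mention silicon -> {1}
excluded = hits("semiconductors") - hits("silicon")
print(narrow, broad, excluded)

Each connector simply combines the sets of matching record identifiers, which is why "and" narrows a search, "or" broadens it, and "not" excludes unwanted records.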
Some databases also use proximity connectors. These produce records in which subject terms must occur within a defined proximity to each other, such as a sentence, a paragraph, or within a specific number of words. Another standard tool for database searching is truncation. This tool is a variation on the “or” connector in which the root of a subject term is defined and records containing all variations of that root are retrieved. For example, a truncation search on “circuit*” would retrieve records that contain “circuit,” “circuits,” or “circuitry.” A more refined method of searching is to limit the search to a specific field, such as subject, author, or title. The advantage of this method is that an undefined search might produce hundreds or perhaps thousands of records, whereas a search limited to specific fields will tend to produce not only a smaller number of records, but records that are more focused on the result the searcher wishes to obtain. For even greater precision in subject field searches, many databases also employ controlled vocabularies of subject terms, which lend precision by establishing authority control for subject terms. Authority control means that one master subject term is assigned to records on appropriate topics, in place of several variants. This practice tends both to increase the number of records retrieved by a search and to focus the scope of the records retrieved more efficiently. INSPEC is widely regarded as the premier bibliographic database in electrical and electronics engineering. The INSPEC database was developed by the Institution of Electrical Engineers and is still managed by the IEE. The scope of this database is: “. . . the worldwide literature of physics, electronics and electrical engineering, computers and control, and information technology (4).” It covers a wide range of publications in these areas; however, the primary focus is on papers published in professional journals, as well as papers presented at scholarly conferences. In these areas, INSPEC is particularly comprehensive, indexing a number of titles that are ignored by other abstracting services. Like other high-level indexing services, such as MEDLINE and PsycINFO, INSPEC has its own classification scheme. Although use of this scheme is not necessary to successfully retrieve records from the database, it can lead to more precision in searching. The scheme is hierarchical. The database is divided into four sections; Section B covers Electrical Engineering and Electronics. Four levels exist within each section, each of which denotes a progressively more specific subject area (5). Records in INSPEC can also be retrieved using keyword, title word, author name, and other search parameters. Although INSPEC has a long history as a print index, it is now typically used on the Internet, usually through site licenses purchased by universities or businesses. The IEEE offers its own bibliographic searching capability, along with electronic access to the full text of its published papers, through IEEE Xplore. This search capability is limited to journals, conference proceedings, and standards published by the IEEE. Within that scope, comprehensive searching is available. It is also possible to search using INSPEC subject terms. If one has access to IEEE publications online, the search results on IEEE Xplore will link to the full text of the retrieved publications.
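Truncation and proximity searching can likewise be sketched in a few lines. The titles and function names below are hypothetical and are not the syntax used by INSPEC, IEEE Xplore, or any other service; the fragment simply shows the matching behavior the text describes.

import re

titles = [
    "Printed circuit board design",
    "Integrated circuits for RF front ends",
    "Circuitry for low-noise amplification",
    "Antenna array calibration",
]

def truncation_match(stem, text):
    # True if any word in the text begins with the stem, e.g. a search on "circuit*".
    return re.search(r"\b" + re.escape(stem) + r"\w*", text, re.IGNORECASE) is not None

def within_n_words(term_a, term_b, text, n):
    # A simple proximity connector: the two terms occur within n words of each other.
    words = [w.lower() for w in re.findall(r"\w+", text)]
    pos_a = [i for i, w in enumerate(words) if w == term_a.lower()]
    pos_b = [i for i, w in enumerate(words) if w == term_b.lower()]
    return any(abs(i - j) <= n for i in pos_a for j in pos_b)

print([t for t in titles if truncation_match("circuit", t)])  # first three titles match
print(within_n_words("circuit", "design", titles[0], 2))      # True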
Another database of importance to electrical engineers is Compendex. This file is based on the Engineering Index, which has been the standard bibliographic index to the literature in engineering since the 1880s. Its scope is all areas of engineering, along with allied areas, such as construction management. Like INSPEC, the Compendex database concentrates on indexing professional journals and the papers presented at scholarly conferences. Some technical reports, books, and increasingly, Internet sites are included as well. The scope of the database is worldwide. Compendex is, in fact, one of the chief vehicles by which engineers in Englishspeaking countries may learn of research reported in other languages. Subject access to the database is accomplished through a controlled thesaurus of subject terms called the Subject Headings for Engineering. Keyword searching is also available, as is limiting of searches by discipline, year, and type of publication. The Web of Science, a product of the Institute for Scientific Information that incorporates its Science Citation Index, is also of interest to electrical engineers. The unique feature of this database is that it indexes papers that have been cited in the scientific and engineering literature. This indexing tends to produce a file that focuses on more influential articles. Also, because of its orientation, Web of Science offers the capability of following the citation trail of particular articles or authors. This type of searching can uncover clues to the influence of particular ideas and discoveries as they reverberate through the literature. To make such searching more efficient, direct links are available both to cited papers and to papers that have cited them. Increasingly, links are also available through Web of Science to electronic full text of scientific papers. For engineers involved in the production of electric power, the DOE Information Bridge website (http://www.osti.gov/bridge) is valuable as well. Produced by the United States Department of Energy, it includes full text of: “. . . unclassified scientific and technical information processed or received by the DOE Office of Scientific and Technical Information (6).” References to the published literature in all areas of energy, including alternative energy sources, are included. It includes online full text of reports published from 1994 to the present. Another form of bibliographic database is the table of contents or contents alert service. The most prominent files in this area belong to Current Contents. Current Contents was first published by the Institute for Scientific Information in 1958 and, in the intervening years, it has become a staple in the research efforts of scientists and engineers. In its print format, Current Contents reproduces the title pages of prominent journals in scientific subject areas, one of which is engineering. This tool has proven to be a convenient method for scientists and engineers to learn of topics in the current literature. In recent years, Current Contents has become primarily available as an online electronic file. Electronic databases such as Compendex and IEEE Xplore also offer content alert services. Subject Bibliographies Subject bibliographies constitute an additional type of research tool. Often, these publications give a comprehensive
view of the literature in specific subject areas (for example, microprocessors). Subject bibliographies may include annotations, or short descriptions, much like abstracts, of each referenced item. A variant of the subject bibliography is the guidebook, also known as a guide to the literature. These sources provide a useful way of identifying the major information sources in a given discipline. DOCUMENT RETRIEVAL Libraries Once articles, books, and other documents have been identified, it is still necessary to retrieve them. Traditionally, retrieval has been best accomplished through the library at the university, institute, laboratory, or company with which an engineer is affiliated. This retrieval method remains an efficient option. The material is on site and readily accessible through subject arrangements, such as Library of Congress classification. Modern libraries have welldeveloped capabilities for accessing electronic databases, electronic journals, information on the Internet, as well as print resources. Most libraries also offer services for obtaining documents from other institutions. Libraries also offer comprehensive reference services, designed to address research strategies and problems. Some libraries, notably the University of Michigan and others in cooperation with Google, are creating “virtual libraries,” which exist on the Internet and are not confined to a physical space (7). Document Delivery In recent years, additional document delivery options have become available. ProQuest and other companies act as clearinghouses for theses, dissertations, and published papers, which they then provide for a fee. Both professional societies and vendors of bibliographic databases also offer services for retrieving full-text copies of published articles. Increasingly, these services are available online through a subscription fee. Copyright Copyright considerations are a major concern in establishing these services. Most vendors include the copyright fee as part of the overall price for each document. In the United States, payment of copyright fees is then handled through the Copyright Clearance Center in Danvers, Massachusetts. The center acts as a clearinghouse for copyright compliance and makes payments to authors. It also maintains close ties with the U.S. Copyright Office (8). Internationally, the World Intellectual Property Organization (WIPO) works to provide copyright protection in the countries that are parties to the various treaties and conventions that govern intellectual property. Electronic Journals Beginning in the mid-1990s, published information began, increasingly, to appear online, particularly through the medium of the Internet. One manifestation of this phenomenon has been the electronic journal or e-journal. Al-
though e- journals can take many forms, it is most common for those in science and engineering to replicate what is available in their print versions. When articles are accessed in electronic journals, they usually appear in one of two formats, HTML (Hypertext Markup Language) or PDF. HTML is the standard programming code for the World Wide Web. Articles coded in HTML have the advantage of ready accessibility, normally within one level of the table of contents of an e-journal. The drawback of this format is that it does not permit the representation of characters, such as mathematical notation, which are outside the bounds of plain text. PDF, on the other hand, represents an exact digital copy of the printed page. The disadvantage here is that to access the PDF representation of an article, one must enter a program that allows viewing of text in that format, a process that can involve several steps. The most popular of these programs is Adobe Acrobat. This situation is mitigated by the fact that Adobe Acrobat Reader is available, free of charge, on the World Wide Web. As a result, and because of the obvious advantages of representing exact images of journal articles online, PDF has become the format of choice for electronic journals in science and engineering. Google Scholar In recent years, Google has emerged as the most popular Internet Search Engine. In 2004, Google unveiled a new product, Google Scholar. The aim of Google Scholar is to provide access to the academic and professional literature (9). Unlike traditional indexing sources, Google Scholar does not use a controlled vocabulary but uses Google’s spider software technology to discover and make use of freely available metadata, or keywords placed in certain HTML codes. As the metadata and the records come from many different sources, it tends to produce an unstructured set of results. However, because Google Scholar is available at no cost and other indexes of the academic literature usually require purchase of a subscription license that can cost thousands of dollars, Google Scholar has become popular among professionals and students, particularly those who do not have access to the resources of large research universities. Internet Sites The Internet opens electronic possibilities that are unknown in other media. E-mail has transformed communication, first among scholars and now throughout the general population (10). On the World Wide Web, the changes have been even more striking. Data sets are now routinely linked to research articles. Links are provided directly to cited works. Photographs and other illustrations can be included with articles or drafts and transmitted with ease. Video can be included to more fully illustrate important concepts. Blogs (Weblogs) and RSS (Really Simply Syndication) feeds help engineers, scientists, and other professionals to keep instantly up to date on advances in their fields. In short, the infrastructure of how information is stored and retrieved is changing rapidly. Evaluation of Internet sites is an important consideration for anyone who uses this resource. An obvious, but
often overlooked, mode of evaluation is to examine what person or entity produced the site. Many of the best sites for electrical and electronics engineers are produced by the same entities that produce the best print resources, such as professional societies, universities, publishers, and government agencies. An example of a valuable website for electrical engineers is that of the IEEE (http://www.ieee.org). The site presents a number of different options from the initial page, or home page, and it is not only attractive but easy to navigate as well. Information is grouped under intuitive headings, such as member services, and more than one path to a particular page within the site usually exists. The degree of editorial control is as important a consideration for evaluating a website as it is for a print source. As constructing a Web page is relatively easy, information in all gradations of quality and truthfulness shows up on the Internet. All information should be viewed with a critical and skeptical eye, which is especially true of Internet sites. SUMMARY Electrical engineering is a young discipline; the basic discoveries were only made in the 19th and early twentieth centuries. However, during the course of its existence, not only the infrastructure but also the scope of how information is communicated within the discipline has changed dramatically. When Michael Faraday first published his findings on the nature of electricity, they were available only to a tiny number of the scientifically curious. Now, scholarly and professional articles can be transmitted instantly to a potential audience of millions. The Internet, in particular, makes it possible for anyone who wishes to communicate scientific information to bypass the traditional gatekeepers of the dissemination of such knowledge. How this situation will be resolved is an open question. What is certain is that a revolution is occurring in the manner in which information is communicated. Many liken it to the revolution, both technical and social, that occurred in the aftermath of the invention of the printing press. Electrical engineers made many of the discoveries in electronics, computer hardware, and telecommunications, which led to this revolution. Increasingly, the discipline of electrical engineering will need to grapple with the changes in information retrieval and use that are occurring at an ever more rapid pace. BIBLIOGRAPHY 1. McNeil, I. Basic Tools, Devices and Mechanisms.In An Encyclopedia of the History of Technology;McNeil, I., Ed.; Routledge: New York, 1990. 2. Parker, S. P., Ed.; McGraw-Hill Dictionary of Engineering. McGraw-Hill: New York, 1984. 3. Oppenheim, C. Patents and Patent Information.In Information Sources in Engineering, 3rd ed.;Mildren, K. W.;Hicks, P. J., Eds.; Bowker-Saur: London, 1996. 4. Knight-Ridder Information. Complete Database Catalogue, 1997. Knight-Ridder Information: Mountain View, CA, 1997. 5. Institution of Electrical Engineers. A Classification Scheme for the INSPEC Database. The Institution: London, 1995.
6. Holmberg, E. E., Ed.; Gale Directory of Databases, Volume 1: Online Databases. Gale Research: Detroit, MI, 1997. 7. University of Michigan. http://www.umich.edu/news/index.html?BG/google/index (accessed November 17, 2006). 8. Copyright Clearance Center, Inc. http://www.copyright.com/ (accessed November 16, 2006). 9. O’Leary, M. Google Scholar: What’s In It for You? Information Today 2005, 22, pp. 35, 39. 10. Kinnaman, D. E. Sentimental Longings: Changing Education with Technology. Technology Information 1997, 17, p. 70.
MICHAEL CULBERTSON Libraries Colorado State University, Fort Collins, CO
INTELLECTUAL PROPERTY IN ENGINEERING Engineering information is a nebulous term used in many different contexts with a wide variety of meanings. The term has classically been used in connection with the results of research and development, innovation, and invention, and with documents, drawings, and data describing or relating to technology. However, the term may also be considered to encompass expertise and other forms of knowledge (know-how). Engineering information is a form of intellectual property. The broader category of intellectual property covers a gamut of intangible assets, such as legal constructs reflecting rights in various types of engineering information and other forms of intangible assets—utility patents, design patents, plant patents, copyrights, mask work registrations, trademarks, trade secrets, and trade dress. THE DIFFERENT TYPES OF INTELLECTUAL PROPERTY Information, Data, and Know-how The term ‘‘know-how’’ is often used in connection with engineering information, and with other types of business information as well. Know-how can be embodied in many forms. It can comprise physical embodiments of technical or business information, such as documents or electronic media. It can also be the intangible personal knowledge and expertise of employees. There is, however, no universally accepted definition of the term. For present purposes, ‘‘know-how’’ will be used as a generic term, generally defined as accumulated practical skill, expertise, data, and information that (1) relate to a company and its operations or (2) facilitate carrying out, manufacturing, or performing any form of industrial procedure or process. Examples of technological know-how include manufacturing processes, specifications, manuals, engineering notebooks, blueprints, vendor and parts lists, inventions, technical developments, and skill and expertise in operating equipment and instrumentation. Examples of business know-how include strategic business plans, marketing plans, internal procedures, sales techniques, and client lists and files. In addition to encompassing both engineering information and business information, and both physical and nonphysical embodiments, know-how may also be categorized as either proprietary or nonproprietary. The primary distinction between proprietary and nonproprietary know-how is the extent to which certain legal protections apply. Proprietary know-how, sometimes referred to as ‘‘trade secrets’’ or ‘‘confidential information,’’ is know-how that, unless
obtained from the owner of the know-how, cannot be derived or, at least, cannot be derived without substantial effort and expenditure of time and money. As will be discussed, legal protection of trade secrets is controlled, for the most part, by state law. To qualify as a legally protectable trade secret in most states, the know-how must not be generally known in the industry, must be subject to appropriate measures to maintain its secrecy and may not be disclosed to any entity that is not also obligated to maintain the know-how in confidence. As will be discussed in more detail, rights in proprietary know-how are typically maintained through physical security procedures and by imposing an obligation of confidentiality on all entities permitted access to the know-how. The obligation of confidentiality is typically imposed through terms in agreements such as confidentiality, employment, development, supply/vendor, manufacturing, foundry, and license agreements. Nonproprietary know-how is information that is generally known in an industry, or basic skills or practices employed in an industry. A typical example would be the skills and knowledge acquired by an employee resulting from being trained in the operation of a commercially available machine. As a practical matter, nonproprietary know-how can best be protected by retaining employees. Under some circumstances contractual noncompetition provisions (obligations not to engage in competitive activities) with employees can be used to prevent competitors from getting the benefit of a company’s investment in employee training. However, such agreements are often difficult to enforce; courts will typically enforce a noncompetition provision only if the scope of prohibited activities, geographic scope, and duration are reasonable under the circumstances. Establishing procedures for recording the details of processes, methods, techniques and data used by skilled employees and for retaining possession of those records also accords a modicum of protection; at least the know-how will not be entirely lost should the employee leave. A related term is ‘‘show-how’’ which sometimes is used to refer to nonproprietary know-how that is communicated to the recipient of technical assistance and/or provided in connection with a license or assignment to enable use of the licensed or assigned rights. Inventions In general, inventions are new technological developments or discoveries produced or created through the exercise of independent investigation and experiment. Inventions may constitute know-how (typically trade secrets) and may be protected as such or as the subject of patents granted by the governments of various countries. Industrial Designs and Design Patents The term ‘‘industrial design’’ tends to mean different things from country to country. In general, however, ‘‘industrial design’’ typically refers to the appearance and nonfunctional aspects of a product. The scope of protection granted with respect to an industrial design tends to vary from country to country. Utility Patents In general, a patent is the grant of a privilege or authority by a sovereign government. As commonly used with respect to
engineering information, a patent is the grant of the exclusive right to make, use, offer to sell, sell, and import an invention. Patents are territorial in nature; they are enforceable only within the territory of the government granting the patent. Typically, patents are granted country by country. In most countries, to be patentable, an invention must be ‘‘novel’’ (not already known to the public) and involve an ‘‘inventive step’’ (be nonobvious to a person skilled in the relevant area of technology). The patents of the various countries differ widely in scope and effect. However, under various international conventions and treaties, an applicant for a patent in any one of the signatory countries is accorded certain rights in the other signatory countries. Utility Model (Petty Patent). Some countries grant petty patents (sometimes referred to as ‘‘utility models’’) on functional elements of a product or process of minor importance, which may not meet minimum requirements for a patent. The term of a petty patent is typically shorter than the term of a patent, and the level of protection is lower. Copyrights A copyright is an exclusionary right provided to the author of an original literary or artistic work. In general, the copyright provides a remedy for unauthorized copying of artistic or literary expression in the work. It does not, however, protect ideas, concepts or methods, and therefore does not extend to use of the underlying logic or information contained in the work. The scope and effect of a copyright varies from country to country. In most countries, copyrights come into existence upon the creation of the work, and registration or compliance with other formalities is not required for protection. Registration does, however, often provide procedural advantages. In some countries, individual authors are granted the right to protect the integrity of a work and to prevent false attribution of authorship. These are called ‘‘moral rights’’ and are personal and nontransferable. Mask Works A ‘‘mask work’’ is defined as a series of related images, however fixed or enclosed, that represent three-dimensional patterns in the layers of a semiconductor chip. A mask work registration is a statutory right in a mask work submitted for registration. Mask work protection is, in effect, a hybrid of copyright and patent protection having a registration system (similar to copyright) and a threshold ‘‘not commonplace’’ requirement (similar to patent novelty). Mask work protection provides a remedy for reproduction, importation, and distribution of chips embodying a registered mask work. Trademarks, Reputation, and Goodwill Trademarks provide only indirect protection to engineering information. Trademarks and service marks (collectively, marks) are words or symbols used to distinguish one entity’s products or services from those of another. A mark identifies the source of the product (or service) and, in effect, connects the goodwill and reputation of a business to its products and services. Under the laws of most countries, a trademark owner has a remedy when a competitor uses a mark that is the same as or similar to the owner’s mark.
Marks serve the dual purpose of protecting the interests of both the owner of the mark and consumers. If all products or services bearing a particular mark come from the same source, consumers can rely on the mark as an indication of consistency in quality. The trademark law is intended to prevent the use of any mark with products or services that is so similar to the use of a mark of another entity that consumers are likely to be confused as to the source of the products or services, or as to sponsorship or affiliation between the providers. From the perspective of the mark owner, the trademark law prevents others from attempting to pass off their goods as those made or sponsored by the mark owner or otherwise capitalizing on the mark owner’s reputation and goodwill. In this way, a trademark protects the market value of the company’s reputation and goodwill, as well as protecting investments in advertising and other promotional activities used to develop goodwill. In some countries, exclusive rights to use a mark can be acquired without registering the mark with the government. In the United States, rights can be established in any of the states through actual use, and federally through being first to (1) actually use the mark in federally regulated commerce or (2) file a federal application, followed by timely actual use and registration. Federal (US) registration can effectively extend the geographic scope of the rights to all states and provide additional remedies. Other countries, however, require registration as a prerequisite to any exclusive right in the trademark and grant ownership to the first to file an application, without regard to ownership in other countries. In these countries, there is a substantial risk of trademark piracy for companies that fail to register their trademark before introducing their products. Trade Dress Trade dress, in general terms, is the appearance and packaging of a product. Where trade dress is sufficiently distinctive and begins to identify the source, origin, or sponsorship of a product, it can, under the laws of some countries, in effect become a trademark. International Treaties The Paris Convention. Under the provisions of the International Convention for the Protection of Intellectual Property (the Paris Convention), if the applicant for a patent or trademark registration in one member country files a corresponding application in any other member country within twelve months of the date of the original filing (or within six months of the filing for a trademark application design patent), the corresponding application is treated as if it had been filed at the same time as the original application. Patent Cooperation Treaty. The Patent Cooperation Treaty (PCT) provides a mechanism for filing a single application for patents in a number of countries. The application is initially filed in a designated receiving office, with a designation of the particular countries of interest. Under the Treaty, an international search authority identifies documents (usually patents) which then are used by an international preliminary examination authority for reviewing the invention for requisite novelty and inventive step (nonobviousness). The application accompanied by the search and preliminary examination
reports is then forwarded to the patent office of each designated country, where it is examined according to that country’s procedures and laws. The applications in all designated countries are processed in parallel. The European Patent Convention. The European Patent Convention (EPC) provides a mechanism for filing a single application for a patent that is applicable throughout most European countries. A single English language application may be filed in the European Patent Office (EPO), designating various member countries (limited to European countries) where the patent will apply. The application is examined by the EPO, and ultimately granted or refused in accordance with the law of the treaty. The rights and enforcement of the European patent in the various designated countries are governed by the laws of the individual country, but the validity of the patent is governed by the law of the Treaty. OVERVIEW AND COMPARISON OF THE BASIC MECHANISMS FOR PROTECTING ENGINEERING INFORMATION Trade secret, patent, copyright, mask work, and trademark protection represent the basic legal mechanisms for protecting intellectual property. In addition to describing the respective types of intellectual property assets, the applicability and scope of these various protection mechanisms will now be discussed, with an emphasis on the law in the United States. Trade Secret Protection Any engineering information that is not readily ascertainable from publicly available information can be a trade secret. Trade secret protection is typically employed in connection with information, data, and know-how and various types of inventions. Maintaining intellectual property as a trade secret protects the intellectual property in the sense that if the competition does not have access to the information, it cannot copy it, and will have to go to the trouble and expense of developing the information itself. Scope of Protection. Maintaining engineering information as a trade secret can protect the information for a potentially infinite period, that is, for as long as the information can be kept from becoming readily ascertainable from publicly available information. State law provides for trade secret rights that are enforceable against others, but to qualify for the protection, the information must not only in fact be confidential (not generally known or readily ascertainable from publicly available information), but also be subject to reasonable efforts to maintain its secrecy (e.g., restricted access, disclosed only under confidentiality agreements). Trade secret rights are enforceable only against: (1) entities under an obligation of confidentiality imposed by written agreement or implied by the circumstances (e.g., employer–employee relationship), (2) entities that gained access to the confidential information by improper means (e.g., through industrial espionage), and (3) in some cases, entities that obtained the information clearly knowing that it was a trade secret that had been improperly obtained by another. The primary disadvantage of trade secret protection is that it provides absolutely no protection whatsoever against an-
other entity independently developing the engineering information, or using publicly available information to reverse engineer the trade secret information. Moreover, under the present law in the United States (1) it is conceivable that if the second entity independently develops the information, it could obtain a patent on the trade secret technology, foreclosing further use of the technology by the first entity. If a trade secret becomes generally known (irrespective of how it becomes known), trade secret protection is, as a practical matter, lost. As a consequence, some types of engineering information that are ascertainable from publicly available sources are totally unsuited for trade secret protection. For example, any engineering information embodied in a product that is sold to the public and can be reverse engineered simply cannot effectively be maintained as a trade secret. Absent some express or implied contractual obligation to the contrary, a party is under no obligation to maintain information in confidence. Generally speaking, reverse engineering is a commonplace and permissible activity. Utility Patent Protection Most engineering information embodied in a product can be ascertained by reverse engineering, and therefore is not suitable for trade secret protection. Accordingly, in the absence of legal mechanisms other than trade secret protection, there would be little incentive for an entity to make the necessary investment in research and development of new products, since the resulting technology (engineering information) would be at risk of being copied as soon as the product was placed on the market. The advancement of technology would be stifled. Recognizing this problem, the framers of the United States Constitution included a provision for federal patent and copyright systems. Article I, Section VIII of the Constitution provides: ‘‘The Congress shall have the power . . . to promote the progress of science and useful arts, by securing for a limited time to authors and inventors the exclusive right to their respective writings and discoveries.’’ Under this authority congress then developed a patent system for inducing an ‘‘inventor’’ to make the necessary investment of time and money in research, while at the same time ensuring that the work of the inventor would ultimately become available to the public. In the United States a patent can be thought of as an agreement between an inventor and the government. The inventor teaches the public how to use the invention. In return, the inventor is given the right to exclude the public from making, using, or selling the invention for a period of up to 20 years from the date that the application for a patent is filed. As will be discussed, the ‘‘agreement’’ logic is reflected in the structure of the patent grant. A patent grant typically is divided into three major parts: a drawing, a written description of the invention, and claims. The drawing and written description teach the public how to make and use the invention. The claims define the rights of the inventor. Requirements for Patentability. Patent protection is available in the United States for ‘‘any new and useful process, machine, manufacture or composition of matter, or any new and useful improvement thereof ’’ (2,3). An invention in any one of those categories is generally patentable if it is ‘‘novel’’ and not ‘‘obvious.’’ The categories of patentable subject matter
and the novelty and nonobviousness requirements are discussed below. Patentable Subject Matter. The categories of patentable subject matter are extremely broad and perhaps most easily approached by reviewing the relatively few things that have been determined to be outside of the statutory provision: Mere Printed Matter. A mere set of words is not patentable. However, patent protection might be available for embodiments of the concept described by the words or for the way paper is folded or perforated relative to printed matter. Methods of Doing Business. Patent protection is not available, for example, for an advertising gimmick, such as the idea of a two-for-one sale. Things Unaltered from a Natural State. For example, a rock taken unaltered from the earth is not, per se, patentable. However, a method of using the rock, a purified or otherwise altered version of the rock, or a mixture employing the rock, might be patentable. Abstract Information and Scientific Principles. Abstract information is not patentable. For example, an entity cannot patent its name. Similarly, abstract principles, divorced from any physical structure (e.g., laws of nature) are not, per se, patentable. For example, Newton could not have patented gravity. As a general proposition, there is no question that electronic or mechanical apparatus, electronic systems, and components are patentable subject matter. Software inventions, however, are often subject to a special test. In the United States, a patent claim on a software invention is first analyzed to see if a mathematical algorithm (defined as a ‘‘procedure for solving a given type of mathematical problem’’) is directly or indirectly recited. If no mathematical algorithm is recited, the claim is directed to a patentable subject matter. If a mathematical algorithm is found, the claim as a whole is then further analyzed to determine whether the algorithm is ‘‘applied in any manner to physical elements or process steps.’’ If the physical elements or process steps are present, the claim is patentable subject matter under the patent statute and, if novel and nonobvious, is patentable. A similar, although not identical test is applied in the European Patent Office. Novelty. ‘‘Novelty,’’ as used in the patent statute, has a broader meaning than the normal English usage of the word. In essence, the statute defines particular circumstances where the invention is considered not to be ‘‘novel,’’ either to have already passed into the public domain, so as to become public property, or to have already become the property of another. The patent statute expressly prohibits obtaining a patent under these circumstances, referred to as statutory bars: Earlier Invention by Another. A person cannot obtain a patent on an invention if, before the person made the invention, someone else: (1) knew of or used the invention in the United States (other than in secret), (2) described the invention in a printed publication or patent application, or (3) made (and did not abandon, suppress, or conceal) the invention.
Premature Public Use or Commercialization. An inventor can lose rights by, among other things, prematurely disclosing the invention, offering products using the invention, using the invention in a commercial process, or using the invention in public. The statute states:
elements of an invention as claimed are not disclosed in prior patents or publications, or if the invention as claimed combines known elements, but no prior patent or publication expressly or impliedly suggests combining those specific elements, the invention is probably nonobvious.
102(b) A person shall be entitled to a patent unless . . . the invention was patented or described in a printed publication in this or a foreign country, or in public use or on sale in this country more than one year prior to the date of the application for patent in the United States . . . United States Code Annotated, Title 35—Patents, 102(b)
Even a single publication, offer for sale, or public use more than one year before the patent application will preclude a valid patent. The United States, in effect, provides a one-year grace period from the sale, use, or publication, in which an inventor can claim his or her rights. Most foreign countries, however, do not provide any grace period whatsoever. Any description of the invention published prior to the filing of a patent application bars the inventor from obtaining patent protection in most foreign countries.

Other Statutory Bars. Rights in the United States can be lost through abandonment, improvident filing of an application for patent in another country, deriving the invention from a different entity, or intentionally misnaming the inventors.

Prior Art. The statutory bars thus define a body of prior knowledge, often referred to as the prior art, against which patentability is measured: information originating from other than the applicant that either is accessible to the relevant public at large, or is known to the applicant for the patent (whether or not generally accessible) prior to the date of invention, and information (whether or not originating with the applicant) that has been either (1) accessible to the relevant public at large, or (2) the basis of commercialization by the applicant for the patent, for too long a period (more than one year) before a patent application on the invention was filed.

Obviousness. In addition, the statutes require that the invention as a whole be "nonobvious" to a person of "ordinary skill in the art" (the average engineer, technician, scientist, or worker in the particular area of technology of the invention). A number of objective criteria are relevant in assessing the nonobviousness of an invention: (1) the scope and content of the prior art; (2) the "level of ordinary skill in the art" (typical education level in the pertinent area of technology); (3) the differences between the invention as claimed and the prior art; and (4) whether the invention provides unexpected results, fulfills a long-felt need, and/or is commercially significant. For purposes of assessing nonobviousness, engineering information that is owned by the same entity that owns the invention and has not passed into the public domain (by virtue of, e.g., publication or commercialization) should not be considered. The nonobviousness of the differences is then measured against the general knowledge of practitioners in the pertinent area of technology at the time of the invention, considering the invention in total context and without hindsight. It is irrelevant that it might have been subjectively obvious to the inventor. As a practical matter, if one or more

The Patent Grant. As noted above, a patent is typically divided into three major parts: a drawing, a written description of the invention, and claims. The drawing and written description teach the public how to make and use the invention. The claims define the rights of the inventor.

The Drawing. The drawing, which may include a number of sheets and figures, depicts all of the elements (components) of the invention. Individual elements are identified with reference letters or numbers to facilitate unambiguous reference to those elements in the written description. A drawing may be omitted if a full disclosure can be made in the written description. The omission of a drawing occurs principally in connection with chemical inventions, and is a rare occurrence when the subject matter of the patent is electronic in nature.

The Written Description. The written description portion of the patent typically includes the following sections: a background, a summary of the invention, a brief description of the drawing, and a detailed description of at least one example (the preferred exemplary embodiment) of the invention. The detailed description identifies the individual elements in the drawing by their reference letters or numbers. The written description must meet two primary requirements: it must be "enabling" and it must "set forth the best mode contemplated by the inventor of carrying out his invention." To be "enabling," the written description must provide sufficient detail to enable any "person skilled in the art" (the average engineer, technician, scientist, or worker in the particular area of technology of the invention) to make and use the invention without undue experimentation. This requires only that the "person skilled in the art" be enabled to make and use something that works for its intended purpose. It need not be efficient or cost-effective. However, the "best mode" requirement precludes the inventor from "holding back" relevant information that is known at the time the application is filed, and teaching the public only how to make or use an inferior version of the invention. In practical terms, if, at the time the application is filed, the inventor believes that a significant advantage can be obtained by (1) performing a particular function or sequence of functions or (2) implementing a particular function in a particular manner or using particular components, the application should detail those preferences in order to meet the best mode requirement.

The Claims. The claims of the patent are, in essence, one-sentence definitions of the patented invention. Typically, more than one claim is submitted to define the scope of protection provided by the patent in alternative ways. An unauthorized device or process directly infringes (violates) the patent only if the device includes elements corresponding to each and every element of at least one claim. (Responsibility for an infringement can also be assessed for "contributory infringement" and "inducing" infringement, but each instance requires that there ultimately be a direct infringement.) The broader and less specific the terms of a claim, the broader the protection afforded by that claim. However, if a claim is written in terms so general that it reads on the prior art (that is, the claim is too broad), it is invalid. The language of the patent claims must, therefore, be drafted with the utmost planning and precision. It is permissible to have a number of different claims in the patent application. As a matter of practice, claims of varying scope, ranging from the most general to the most specific, are submitted. In this way, if it appears after the fact that some relevant piece of prior art exists that invalidates the broad claims, the other, more specific claims are not necessarily invalidated. In this manner, the inventor can obtain protection not only on the broad aspects of his invention, but also on the specifics of the particular product that is put on the market.

Claims can be categorized as independent and dependent. Independent claims expressly set out all of the claim elements. Dependent claims incorporate other claims (parent claims) by reference and add elements or more specific detail. In order to infringe a dependent claim, an accused device must include not only elements corresponding to each and every element expressly set forth in the dependent claim, but also elements corresponding to each and every element of the claim incorporated by reference (and each and every element of any claim incorporated by reference by the parent claim, and so on).

Assume, for example, that inventor A has obtained a patent on the "stool" shown in Fig. 1, with the following claims:

1. Apparatus comprising: (a) a seat having upper and lower surfaces; and (b) at least one leg cooperating with the lower surface of the seat and disposed to maintain the seat a predetermined distance above the ground.
2. The apparatus of claim 1 including a plurality of legs.
3. The apparatus of claim 2, further including at least one support member connecting at least two of the legs together.
4. The apparatus of claim 2 including at least first, second, and third legs.
5. The apparatus of claim 4, further including: (a) at least one support member connecting the first leg to the second leg; (b) at least one support member connecting the second leg to the third leg; and (c) at least one support member connecting the third leg to the first leg.
6. The apparatus of claim 1 including three legs, each leg being attached to the lower surface of the seat at one end and extending outwardly at an angle transverse to a line perpendicular to the lower surface of the seat.

(Figure 1. Inventor A's "stool"; the drawing labels a seat, legs, support members, and an angle θ.)

Claim 1 is an independent claim. Claims 2–6 are all dependent claims. For example, to infringe claim 4, an accused device must include not only first, second, and third legs, as expressly recited in claim 4, but also a plurality of legs as called for in claim 2, and a seat maintained a predetermined distance above the ground by at least one of the legs as called for in claim 1.
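The "all elements" rule, and the way dependent claims fold in their parents' elements, can be illustrated with a short program. The following Python sketch is purely illustrative (it is not from the article, and the element labels are simplified stand-ins for the claim language); it treats each claim as a set of elements plus an optional parent claim and reports which claims an accused device would read on.

# Minimal, hypothetical model of the "all elements" rule for patent claims.
# Each claim lists its expressly recited elements and, for a dependent
# claim, the parent claim it incorporates by reference.
STOOL_CLAIMS = {
    1: {"elements": {"seat", "leg"}, "parent": None},
    2: {"elements": {"plurality of legs"}, "parent": 1},
    3: {"elements": {"support member"}, "parent": 2},
    4: {"elements": {"three legs"}, "parent": 2},
}

def resolved_elements(claims, claim_id):
    """Collect a claim's own elements plus those of every claim it
    incorporates by reference (its parent, the parent's parent, and so on)."""
    elements = set()
    while claim_id is not None:
        elements |= claims[claim_id]["elements"]
        claim_id = claims[claim_id]["parent"]
    return elements

def infringed_claims(claims, device_elements):
    """Return the claims whose every resolved element is found in the accused device."""
    return [cid for cid in claims
            if resolved_elements(claims, cid) <= device_elements]

# Inventor B's "chair": a seat, four legs (hence a plurality and at least
# three legs), a support member, and a back.
chair = {"seat", "leg", "plurality of legs", "three legs", "support member", "back"}
print(infringed_claims(STOOL_CLAIMS, chair))   # [1, 2, 3, 4]

Run against a device having a seat, four legs, a support member, and a back, the sketch reports claims 1 through 4, matching the chair analysis that follows. Claims 5 and 6 are omitted from the sketch because their geometric limitations (which legs the support members join, and the outward slant of each leg) do not reduce to simple set membership.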
To accomplish the purpose of the patent system, it is important to promote improvements on inventions without denigrating the protection provided for basic inventions. For this reason a patent provides an "exclusive" (exclusionary) right to the inventor. That is, the patentee has the right to exclude others from practicing the invention. However, the patentee does not have the right to practice his or her invention if doing so would infringe the prior patent of another.

Continuing the above example, assume that inventor B purchases a stool and improves upon it by developing a back and a more stable support structure (adding a fourth leg), as shown in Fig. 2. Also assume that the improvements are not obvious, and are otherwise patentable, and that inventor B obtains a patent on the "chair" with the following claims:

1. Apparatus comprising: (a) a seat having upper and lower surfaces; (b) at least one leg cooperating with the lower surface of the seat and disposed to maintain the seat a predetermined distance above the ground; and (c) a back extending above the seat.
2. The apparatus of claim 1 including four parallel legs.
3. The apparatus of claim 2, further including at least one support member connecting at least two of the legs together.
4. Apparatus comprising: (a) a seat having upper and lower surfaces; and (b) four legs, cooperating to maintain the seat a predetermined distance above the ground.

(Figure 2. Inventor B's "chair"; the drawing labels a back, a seat, legs, and a support member.)

Both inventor A and inventor B have patents. However, notwithstanding the addition of the back and four legs, inventor B's "chair" still includes elements corresponding to each and every element of claim 1 of the "stool" patent. It includes a seat, and "at least one" leg (viz., four legs) cooperating with the seat lower surface and disposed to maintain the seat a predetermined distance above the ground. It likewise infringes claims 2, 3, and 4 since, in addition to including all of the elements of claim 1, the chair's four legs are also "a plurality of legs" (claim 2), it includes a support member connecting at least two of the legs together (claim 3), and four legs also include "at least first, second, and third legs" (claim 4). Claims 5 and 6 would not be infringed. The support member of the chair does not connect the third leg to the first leg as called for in claim 5. Likewise, a chair with "parallel" legs would not infringe claim 6 of the stool patent; the legs of the chair do not extend outwardly at an angle transverse to a line perpendicular to the seat.

Since inventor B's chair infringes one or more claims of the "stool" patent, inventor B is precluded from making or using the chair in the absence of authorization (a license) from inventor A. On the other hand, while inventor A is free to make, use, and sell the "stool" (assuming that no other patents cover the stool structure), inventor A cannot put a "back" on the "stool" without infringing claim 1 of inventor B's "chair" patent. Similarly, inventor A cannot increase the number of legs on the stool to four without infringing claim 4 of the "chair" patent. In theory, to meet market demand for chairs and stools, inventors A and B could each obtain licenses from the other under the respective patents, and, where there was initially only one "stool" manufacturer, there would be two "chair" manufacturers. Although in reality other factors may, of course, come into the picture, the basic model is applicable to many situations, particularly when the improvement patent is obtained considerably later than the basic patent.

Obtaining a Patent. In general, the process of obtaining a patent is initiated by filing an application with the appropriate governmental authority. In the United States, that authority is the Patent and Trademark Office (PTO) of the Department of Commerce. The application is, in effect, a proposed patent including a drawing, a written description, and one or more claims. The application is then examined and prosecuted before the PTO. During the examination and prosecution of the application, the PTO determines whether the application is in
proper form, and the exact scope of the claims to which the inventor is entitled, if any, is negotiated. The application is first received by the PTO, then assigned a serial number and accorded a filing date, classified as to subject matter, and ultimately assigned to a Patent Examiner with special expertise in the relevant field of technology. The Examiner reviews the application for formalities, and conducts an investigation, searching the PTO files of prior patents and literature to determine if there is any relevant prior art. The results of the review and investigation are then reported in what is known as an office action. In brief, the office action lists all of the references (prior art) considered by the Examiner and indicates whether or not the Examiner considers the application to be of proper form, and the claims to be anticipated by or rendered obvious by the references. A response to the office action (assuming one is necessary) is then filed, answering each and every issue raised by the Examiner by traversing (arguing against) the Examiner's positions, amending the claims, or canceling the claims (that is, accepting the Examiner's rejection). It must be stressed that it is the language of the claims that is controlling in arguing against a rejection; differences between the references cited and the preferred exemplary embodiment described in the specification are not controlling. The response must explain to the Examiner how the specific language of the claims is distinguished from the references. The claims, however, can be amended to include any detail described in the specification. The specification, on the other hand, cannot be amended, except for nonsubstantive editorial changes that do not introduce new matter into the disclosure. After the Examiner and the applicant agree with respect to the exact scope of the claims, the application is given a patent number and an issue date, and issues on that date as a patent. The term of a new patent is twenty years from its filing date. Prior law still provides older patents a term of seventeen years from issue date.

Patent Filing Timing Strategies. The point in time during the development cycle of a product at which a patent application is filed can control the level of disclosure necessary in the written description. The applicant is not required (and in fact is not permitted) to add new matter to the application once it has been submitted to the PTO. The required detail can be minimized by filing an application early in the development cycle. If an application is filed when the basic elements of the invention have been established, but before details of implementation have been determined and there is no particular preferred embodiment of the invention, the additional detail necessary to meet the best mode requirement over and above that necessary to meet the enablement requirement is minimal. Disclosure of later developed preferences, details of implementation, and improvements can thus be avoided, and these matters can be maintained as trade secrets, if not evident from the patented item as marketed. However, if patent protection specifically directed to those later developed preferences, details of implementation, and improvements is to be obtained, a new application will have to be filed. Once issued, an inventor's own patent may be cited as a prior art reference by a Patent Examiner against a later filed application by the same inventor for a patent directed to further improvements.
Accordingly, care must be taken to ensure that the later application is not delayed until the earlier
patent becomes prior art under the laws of the jurisdiction where the application is to be filed.

Copyright Protection

To the extent engineering information qualifies as a "work of authorship," it is amenable to copyright protection in most countries. In the United States (4), works of authorship include engineering writings and other works, for example, drawings, documentation, manuals, blueprints, and at least the literal aspects (code) of computer programs. Of course, works of audio, visual, and literary art are covered as well. However, the statute specifies that copyright does not extend to ideas, methods, systems, mathematical principles, formulas, and equations.

Under the Copyright Act of 1976, which became effective on January 1, 1978, a copyright is automatically created as soon as an "original work of authorship" becomes "fixed" in a tangible form of expression (e.g., a form that can be perceived either directly or with the aid of a machine or device). Neither publication nor registration is necessary to secure copyright protection. In addition, as of March 1, 1989, the United States became a signatory to the Berne Convention for the Protection of Literary and Artistic Works, and most of the earlier formalities involved in securing and maintaining a copyright in the United States have been relaxed. Registration remains necessary, however, for a US national to bring an infringement action and to obtain certain rights (recovery of statutory damages and the right to recover attorneys' fees as costs), and the use of a copyright notice remains desirable to deter copying and to avoid claims of innocent infringement.

In general, the individual who creates a work is considered the author for copyright purposes. However, if the work is prepared by an employee within the scope of his or her employment, the work is a work for hire and the employer is considered the author (and owner) of the copyright. Under certain circumstances, a work prepared by a nonemployee can also be a work for hire. To qualify as a work for hire, the work by a nonemployee must fall within certain categories (contribution to a collective work, part of a motion picture or audiovisual work, translation, supplementary work, compilation, instructional text, test or test answers, or atlas) and the parties must expressly agree in writing that it will be a work for hire.

The term of a copyright for works created on or after January 1, 1978, is the life of the author plus 50 years if created by an individual, or, if a work made for hire, the earlier of 75 years from publication or 100 years from creation. For works of individuals published before January 1, 1978, the term was 28 years, renewable for a second term of 28 years and subject to possible extension for a total of 75 years.

Scope of Protection. The owner of a copyright on a work is granted a number of exclusive rights: to reproduce the copyrighted work; to prepare derivative works based upon the copyrighted work; to distribute copies of the copyrighted work to the public by sale, rental, lease, or lending; and to publicly display or perform the copyrighted work (in the case of musical, dramatic, and similar types of works). However, these exclusive rights are limited by the doctrine of fair use and, in certain instances, archival rights and compulsory licensing.
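Because the term rules above are stated as simple date arithmetic, they can be captured in a few lines. The Python sketch below is illustrative only; the dates are hypothetical, and it encodes just the post-1977 terms as stated in this article (life of the author plus 50 years, or, for a work made for hire, the earlier of 75 years from publication or 100 years from creation).

from datetime import date

def copyright_expiration(created, published=None, author_death=None, work_for_hire=False):
    """Rough expiration year under the rules stated above for works created
    on or after January 1, 1978."""
    if work_for_hire:
        candidates = [created.year + 100]          # 100 years from creation
        if published is not None:
            candidates.append(published.year + 75)  # or 75 years from publication
        return min(candidates)                      # whichever expires earlier
    return author_death.year + 50                   # life of the author plus 50 years

# An individual's report whose author dies in 2020: protection runs to 2070.
print(copyright_expiration(date(1998, 6, 1), author_death=date(2020, 1, 1)))
# A work for hire created in 1998 and published in 1999: 1999 + 75 = 2074.
print(copyright_expiration(date(1998, 6, 1), published=date(1999, 2, 1), work_for_hire=True))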
A copyright affords a relatively limited protection. A copyright protects only the expression of ideas, not the ideas themselves. For example, with respect to a computer program, a copyright protects only the author’s specific expression and does not extend to the underlying idea or concept of the program or to aspects of the program that are dictated by function. Such a copyright is infringed by the copying of substantial portions of the program code. However, independent development of the program is a complete defense. Whether or not other aspects of the program (such as its modularity, structure, sequence, and organization) are protected by the copyright depends upon whether the particular aspect that is copied is categorized as idea or expression. Notice. While a copyright notice is no longer necessary on works published after March 1, 1989, all published copies should bear a notice of copyright when a work is published. Authors planning to submit their original work to an editor or publisher should add a copyright notice on the work. Use of proper copyright notice effectively precludes an ‘‘innocent infringement’’ defense. Basically, the notice of copyright includes three elements: the copyright symbol, the word ‘‘copyright,’’ or the abbreviation ‘‘Copr.’’; the named owner of the copyright; and the year of first publication of the work. If the work is not yet published, the year of first publication is replaced with the word ‘‘unpublished.’’ Registration. Except in the case of publication without a proper copyright notice on works first published before March 1, 1989, copyright registration is not a prerequisite for copyright protection. However, registration is significant in three respects: (1) a registration is normally necessary before the copyright on works of U.S. origin can be enforced in court; (2) if the registration is made before publication or within five years of publication, the mere fact of registration establishes the validity of the copyright and of the facts stated in the copyright certificate in court; and (3) the copyright statute provides for ‘‘statutory damages and attorneys’ fees’’ which may range, according to the circumstances, from $500 (in the case of an innocent infringer) to up to $100,000 (in the case of a willful infringer). Trademark Protection In the United States, common law trademark rights are acquired immediately upon use of the mark (word, symbol or nonutilitarian aspect of a product used to distinguish an entity’s products or services from those of other entities, that is, trademark, service mark, or trade dress) in legal commercial transactions. In general, the first to use a given mark in connection with particular goods or services in a given geographical area (e.g., a particular state) obtains the exclusive right to the mark for use with those goods in that area. However, another entity who subsequently adopts the mark in a remote geographical area (e.g., a distant state), without knowledge of the prior use of the mark by the first entity, will acquire valid common law rights to the mark in the remote area. However, once the mark is used in commerce (or a bona fide intent to use the mark in interstate commerce is formed), a federal registration may be obtained. Registration of a mark on what is known as the Principal Register of the patent and trademark office provides constructive notice of the registrant’s claim of
ownership of the mark (i.e., has the same effect as actual knowledge of the mark). This prevents entities in remote geographical areas from subsequently adopting and obtaining rights in the mark. In general, a proper mark that has actually been used (sold or transported) in interstate commerce ("commerce which may lawfully be regulated by Congress") and is not "confusingly similar" to a mark already being used by another is eligible for registration on the Principal Register of the Patent and Trademark Office (5).

An application for registration of a mark may be filed prior to actual use on the basis of a good faith intention to use a mark in commerce. To obtain a registration, however, actual use of the mark must be initiated within a predetermined period (six months, extendable to one year upon request, and up to 24 months for good cause) beginning on notice from the Patent and Trademark Office indicating that the mark is otherwise entitled to registration. A trademark application must specify a description of the goods and services in connection with which the mark will be used and list the classifications in which application for registration is being made. During prosecution of a trademark application, the description of goods and services cannot be expanded. An application based on an intent to use will establish constructive use of the mark as of the filing date of the trademark application, with effect only after actual use is commenced and a registration is issued. Filing a trademark application creates trademark rights for use of the mark for the specified goods and services that will prevail over later filed intent-to-use applications for goods or services for which the marks are confusingly similar, but not over the rights of others who (1) actually used the mark prior to the filing date, or (2) can claim an earlier treaty priority date based upon a corresponding foreign trademark application. After approval by the PTO, marks are published for opposition by anyone who believes that the registration of the mark may cause damage, usually a prior user of a similar mark. Opposition proceedings are conducted by the PTO as administrative trials to determine whether the applicant is entitled to the registration.

Mask Work Protection

A special form of protection is available for semiconductor chip products. Under the Federal Semiconductor Chip Protection Act, mask works are defined as a series of related images, however fixed or encoded, that represent three-dimensional patterns in the layers of a semiconductor chip. Protection is available for any mask work unless:

1. the mask work is not original (that is, the mask work was copied),
2. the mask work consists of designs that are "stable, commonplace, or familiar in the semiconductor industry, or variations of such design, combined in such a way that, considered as a whole, are not original," or
3. the mask work was first commercially exploited more than two years before the mask work was registered with the Copyright Office.
In essence, the Semiconductor Chip Protection Act protects against the use of reproductions of mask works in the manufacture of competing chips. However, the Act makes it absolutely clear that competitors are not precluded from reverse engineering the chip for purposes of analysis or from using any (unpatented) idea, procedure, process, system, method of operation, concept, principle, or discovery embodied in the mask work.

NEED FOR KEEPING ACCURATE RECORDS

There are a number of instances when it becomes necessary to prove the date and nature of technical activities, and the project with which the activities are associated. For example, such proofs are often determinative in: disputes regarding ownership of technology—whether certain technology was first made under a particular development contract or Government contract; disputes regarding whether particular technology is covered by a particular license agreement; disputes regarding whether certain technology is subject to a confidentiality or nonuse agreement; proving, as a defense to patent infringement, prior development of an invention that was not abandoned, suppressed, or concealed; and interference proceedings before the Patent and Trademark Office—contests to determine priority of invention.

The act of inventing involves two steps: conceiving the invention (technology), then reducing the invention to practice. Conception is basically the mental portion of the inventive act. Reducing the invention to practice is, in basic terms, building the invention and proving that it works for its intended purpose. The filing of a patent application is considered to be a constructive reduction to practice. The diligence with which an inventor works to reduce an invention to practice, after it has been conceived, should be documented with accurate records. As a general proposition, if inventor A was both the first to conceive and the first to reduce the invention to practice, inventor A will be deemed the first to have "made" the invention. However, if inventor A was the first to conceive the invention, but the last to reduce the invention to practice, inventor A will still be deemed first to have made the invention if "diligence" in pursuing the reduction to practice can be proven from a time period prior to the conception of the invention by inventor B. However, if inventor A cannot prove reasonable diligence in pursuing the reduction to practice beginning with a date before inventor B conceived the invention, inventor B will be deemed first to have made the invention.

As a general proposition, each aspect of the two-step process of making an invention must be proven by more than just the word of the inventor. The word of the "inventor" (or even coinventors) as to when and where an invention was conceived or reduced to practice is essentially ineffective without corroboration. Corroboration can be in the form of dated documents, engineering notebooks, drawings, "write once" media (e.g., optical disk), time records, and oral testimony by noninventors. Consideration should also be given to the evidentiary value of records. For example, records stored on magnetic media that are easily altered have less evidentiary value than records stored on "write once" media, or records that have been escrowed to ensure integrity.
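The evidentiary concern above is that easily altered media prove little. One common way to make an electronic record tamper-evident, offered here only as an illustration and not drawn from the article, is to compute a cryptographic digest of the record when it is created and lodge that digest with a disinterested party or on write-once media; a later recomputation that still matches corroborates that the record has not been changed. A minimal Python sketch:

import hashlib

def record_digest(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical use: compute the digest of a notebook scan when it is made,
# store the hex string somewhere the record keeper cannot quietly change
# (a dated printout, an escrow agent, or write-once media), and recompute
# it later to show the file is unchanged.
# print(record_digest("notebook_scan_1998_03_15.tif"))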
CONCLUSION

The various mechanisms for protecting engineering information are not necessarily mutually exclusive. In many cases,
different forms of protection can be used concurrently to protect different aspects of a given product. With a little forethought, and accurate and complete records, inadvertent loss of rights can be avoided and an appropriate strategy using each of the various protection mechanisms to its best effect can be developed.

BIBLIOGRAPHY

1. Uniform Trade Secrets Act, 14 U.L.A. 286 (Supp. 1987).
2. United States Code Annotated, Title 35—Patents.
3. M. A. Lechter, Successful Patents and Patenting for Engineers and Scientists, Piscataway, NJ: IEEE Press, 1995.
4. United States Code Annotated, Title 17—Copyrights.
5. United States Code Annotated, Title 15, §1051 et seq.—Trademarks.

Reading List

M. F. Jager, Trade Secrets Law, San Francisco: Clark Boardman Callaghan, 1985.
M. A. Lechter, The Intellectual Property Handbook, Phoenix, AZ: Techpress, 1994.
M. A. Lechter, Successful Patents and Patenting for Engineers and Scientists, Piscataway, NJ: IEEE Press, 1995.
M. B. Nimmer, Nimmer on Copyright, New York: Matthew Bender, 1996.
E. P. White, Licensing, A Strategy for Profits, Alexandria, VA: Licensing Executives Society, 1997.
D. S. Chisum, Chisum on Patents, New York: Matthew Bender, 1997.
J. T. McCarthy, McCarthy on Trademarks and Unfair Competition, San Francisco: Clark Boardman Callaghan, 1997.
MICHAEL A. LECHTER Squire Sanders & Dempsey
Intellectual Property in Engineering
Standard Article
Wiley Encyclopedia of Electrical and Electronics Engineering
Michael A. Lechter, Squire Sanders & Dempsey, Phoenix, AZ
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7021
Article Online Posting Date: December 27, 1999
Abstract | Full Text: HTML PDF (121K)

Abstract
The sections in this article are
The Different Types of Intellectual Property
Overview and Comparison of the Basic Mechanisms for Protecting Engineering Information
Need for Keeping Accurate Records
Conclusion
INTERNATIONAL COMMUNICATION

It is increasingly common for engineers and scientists in one country to send proposals, drawings, or reports to their counterparts in other countries and to collaborate with them in the preparation of documents. These actions are examples of international communication, which can be defined as the transmission of verbal and graphical messages between individuals or companies affiliated with different countries. International communication may occur face to face or by means of various media. Cultural rather than legal influences usually distinguish international communication. Several developments in the last two decades of the twentieth century increased the need for international communication: the emergence of new states after the fall of the Soviet Union, the creation of powerful free trade zones first in Europe and then in North America and South America, the rise of a global economy, the abundance of international telecommunication systems, and the refinement of powerful, low-cost communication technologies. These changes provided both the motivation and the means for countries, companies, and individuals to communicate across national boundaries.

Nonetheless, international communication poses many challenges. Those who wish to communicate across borders may have to deal with language differences and with several other important cultural differences. These differences may involve ideas about status, formality, directness, the separation or merging of business and private relationships, the goal of harmony versus honesty and candor, the degree to which truth is derived from absolute principles or is related to specific circumstances, the importance of individuals versus the importance of groups, the value of oral versus written communication, and how much shared background knowledge or context should be included in a document or presentation. In addition, the relationship of corporate culture to all of the national or ethnic cultures impinging on a specific situation may complicate communication processes.

LANGUAGE ISSUES IN INTERNATIONAL COMMUNICATION

Persons from countries with the same official language, such as Venezuela, Mexico, and Argentina, may be able to use their native language to communicate with colleagues across borders with very little difficulty; they may find that documents and linguistic conventions vary only slightly. However, in many cases communicators in different countries will not speak the same native language. Which language they choose for international communication may depend on international agreements or on corporations' choices rather than personal preferences. By international agreement, English is the official language of air traffic control and naval communication: Pilots landing in Moscow and Milwaukee speak in English, as do ship captains everywhere. In circumstances not covered by treaty or national law, international communication may be conducted in only one language or a combination of languages.

SIMPLIFIED LANGUAGE SYSTEMS
To accommodate international audiences who have limited knowledge of a specific language needed in the situation, writers or speakers sometimes communicate information with a simplified language system (1,2). Basic English, developed by C. K. Ogden (3–5), is a widely used means of preparing technical documents for international audiences with limited fluency in English. Basic English has a select vocabulary of 850 words representing crucial concepts. Similarly, International Scientific Vocabulary, a list adopted in 1959 for use in the sciences and other specialized studies, contains words or other linguistic forms current in two or more languages. English is spoken by only 750 million of the world’s 6 billion inhabitants (Chinese is spoken by the largest number); but a high proportion of the world’s technical specialists and engineers use English (6), making Basic English and International Scientific Vocabulary suitable for communication among technical professionals. Nevertheless, Basic English
has some limitations and is not especially easy for native speakers to learn to use (7,8). TRANSLATION Translation is another approach to overcoming language differences. Documents written in one language may be translated into another language or into several, making them available to a wide range of readers, just as some sites on the World Wide Web present the same information in two or more languages. Translation is not always fully accurate. Source languages sometimes contain words for which no equivalent can be found in the target language. Some words may have several meanings, and a translator may select the wrong meaning, introducing connotations the author did not intend. For example, in English a ‘‘seasoned engineer’’ means one who has long or varied experience. The phrase draws metaphorically on one meaning of ‘‘season,’’ ‘‘to dry freshly cut wood and prepare it for long-lasting use.’’ However, a translator once rendered this phrase in Spanish as a ‘‘spicy engineer,’’ which drew on another meaning of the verb ‘‘season’’: ‘‘to add spices or herbs to food for flavor.’’ New resources for translation are making it easier for engineers and scientists to communicate despite being located in different countries and speaking different native languages. On-line dictionaries can be consulted easily. The LOGOS dictionary at http://www.logos.it/query/query.html is an on-line, freely accessible database containing nearly 8 million entries (total for all 30 languages) produced by a network of professional translators and contributors to the site. The on-line Beilstein Dictionary (German/English) at http://wwwsul.stanford.edu/depts/swain/beilstein/bedict1.html enables readers who are not native speakers of German to read successfully the Beilstein Handbook of Organic Chemistry. With a total of about 2100 entries, the dictionary alphabetically lists most German words and common abbreviations occurring in the handbook with their English equivalents. Voice entry software and an automatic translation service, now available free at the Altavista site http://babelfish.altavista.digital.com, are only two of many recent innovations likely to offer help to international communicators in the future.
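A controlled vocabulary such as Basic English lends itself to mechanical checking: a draft is scanned and any word not on the approved list is flagged for rewriting or for translation support. The Python sketch below is illustrative only; the short word set is a stand-in for the actual 850-word Basic English list, which a real checker would load from a file.

import re

# Stand-in for an approved word list such as Ogden's 850-word Basic English
# vocabulary; a real checker would load the full list from a file.
APPROVED = {"the", "engine", "is", "at", "rest"}

def flag_unapproved(text, approved=APPROVED):
    """Return the words in the text that are not on the approved list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w for w in words if w not in approved})

print(flag_unapproved("The engine is at rest"))      # []
print(flag_unapproved("The engine is stationary"))   # ['stationary']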
EFFECTS OF NATIONAL LAWS ON COMMUNICATION

National laws affect some aspects of international communication, such as forms of envelope address, means of transmission, costs of postage, delivery, taxation, or size of documents. National laws may also limit the topics that can be discussed. For example, governments may prohibit the dissemination of some types of technical information for reasons of national security or other policies (classified information or national defense secrets, for example). Some countries, such as Iran and the People's Republic of China, forbid citizens to use some forms of communication, such as the World Wide Web, or require that they use state-controlled networks.

EFFECTS OF CULTURAL DIFFERENCES

The more significant differences between national (domestic) and international communication arise from differences between cultures. Communicators in a single country tend to share the same assumptions about communication situations, genres, relationships with audiences, politeness, roles of participants, timing of communication events, aesthetics, graphical conventions, and style. Each of these assumptions may control several decisions in communication processes and may interact in complex ways. For example, ideas about politeness and respect for status are likely to combine with ideas about graphical conventions and audiences to influence document design, vocabulary (or register), style, type treatments, types of illustrations, choice of colors, organization of content, and suitability of content. Culture has been defined many ways, but most of these definitions involve either (1) the objects or actions produced by a group that shares values and beliefs or (2) the mental predispositions acquired through socialization and instruction (9,10). Although either kind of definition has advantages, both can be useful in studying international communication, which involves both communication products (such as documents, presentations, graphics, and conversations) and communication processes (rules for who may speak, ways of organizing information, and attitudes toward communication actions).

INTERACTION OF NATIONAL, REGIONAL, ETHNIC, AND CORPORATE FACTORS

Although countries' institutions as well as informal groups may foster particular national identities through their visual symbols and documents, a national culture is not shared uniformly by everyone in a nation. A country's population may include groups of people of many different regional and ethnic heritages, and the popular concepts of national culture may not match any of these groups' practices exactly. In the United States, regional cultures differentiate Texans from New Yorkers as much as ethnic cultures do. Therefore communicators cannot adopt the conventions of one group or the other blindly; it is wise to consider what one knows about the person(s) who will receive and respond to a communication as individuals with unique characteristics, as well. The preferences of the powerful and wealthy usually enjoy prestige, but ethnic and corporate cultures may also shape document design and audience expectations.
INFLUENCE OF CORPORATE NORMS AND POLICIES Corporations sometimes influence cultural norms for communication by their document review systems and the ways they enforce communication norms of the headquarters’ corporate culture. For example, all reports to be distributed internationally may be reviewed and issued by an individual from the corporation’s home country, or all documents and communications may be produced in one language only. Western companies have been accused of imperialistic hegemony by insisting on the use of English. However, the choice of language may or may not be imperialistic, according to Weiss (11). Traditionally, languages were spread by means of military conquest. In the last quarter of the twentieth century, the principal reason for the spread of English has been the lure of participation in the global economy. English has, to its credit, often been the language of democratization and revolution,
although serious exploitation still may occur in international business through linguistic and other means (12). CORPORATE POLICIES ABOUT INTERNATIONAL COMMUNICATION The strength of the corporate culture and the degree to which it identifies with a national culture will affect the degree to which it accommodates the local practices of its employees abroad and clients in other cultures. A foreign company may choose to (1) use the native language and communication practices of its home country in all locations worldwide, (2) use the local language for documents and conversations in offices abroad but translate corporate-level communications into the home country languages, or (3) allow each office to tailor its communication practices to the local customs. While the third option makes it easy to interact with local people, it may impede coordination among offices in various countries and reduce the likelihood that managers from the home country headquarters can communicate with local staff easily, and it tends to reduce the overall efficiency of the multinational firm. Corporate executives must decide at what level the greatest opportunities lie and where the communication problems will be the greatest, matching corporate strategy to international communication strategy, according to Leininger (13). WHERE TO FIND HELP Government departments, organizations, translation companies, consultants, on-line resources, and printed materials now make it easier to begin international communication. International groups’ departments (such as the North American Free Trade Agreement Secretariat) (14), national government agencies (such as the US Department of Commerce), organizations such as the Business Council for International Understanding (http://www.bciu.org/), and chambers of commerce all over the world now have web sites that can be identified with search engines on the World Wide Web. Many translation companies have staff from a variety of countries who can give advice on adapting a communication for a particular international audience in addition to translating the words of an original document. Free on-line translation services can translate simple messages destined for speakers of another language (but these services may make serious errors when doing word-for-word translation). Consultants and local representatives will provide (for a fee) advice on initiating contacts and negotiating abroad. International trade associations such as the International Air Cargo Association can provide advice on how to avoid problems (15). International lawyers specialize in reviewing documents and preparing them for use in other countries. Handy printed reference books such as Merriam Webster’s International Business Communications (16) are packed with details on salutations and closing phrases in letters, how to send faxes, how to address envelopes, how to place international telephone calls, and how postage is calculated. These details can make the difference between success and interminable delays. However, it is important to avoid adopting without question broad stereotypes in the ‘‘do’s and taboos’’ books that necessarily simplify in covering many countries (17).
THE DEMAND FOR INCREASED INTERNATIONAL COMMUNICATION

Political and Economic Changes Create New Relationships

Many political, economic, and technological changes in the last two decades of the twentieth century created a demand for international communication. As reform movements swept the Iron Curtain countries in 1990, Communist governments in Poland, Hungary, and Czechoslovakia were ousted. Communist East Germany dissolved and became part of the Federal Republic of Germany. Estonia, Latvia, and Lithuania were granted independence, and on December 21, 1991, the USSR formally ceased to exist as 11 of the 12 remaining republics agreed to form the loosely defined Commonwealth of Independent States. Because economic problems had precipitated these political changes, leaders in the new states sought assistance and development funds. The introduction of a capitalist economic system and models of doing business required communication with many people, companies, and governments formerly barred by the "Iron Curtain" of the USSR's boundaries and policies. Further, the end of the arms race between the two superpowers, the United States and the Soviet Union, promoted new confidence that money formerly devoted to military preparedness could be diverted fruitfully to trade and nonmilitary activities. New international ties were formed and international communication increased, although the process was fraught with difficulties.

International Trade Agreements

The Treaty on European Union (Maastricht Treaty) formed an organization of most of the states of western Europe, the European Union (EU). The success of the liberalized trade policies sponsored by the EEC (or EC, 1957–1991) in the 1960s, 1970s, and 1980s made EEC members more receptive to greater integration of the EC. In 1987 they reached an agreement to lift exchange controls and create a unified, free-trade market in western Europe that would permit banking, insurance, securities, and other financial services to be offered throughout. The prospect of a huge, unified European market prompted multinational companies to establish businesses there and motivated other governments to contemplate forming other free trade agreements elsewhere. Although the US Congress bitterly debated the provisions of the North American Free Trade Agreement (NAFTA) for a prolonged period, it endorsed the agreement, which began to be implemented in January 1994. NAFTA was designed to eliminate tariffs completely over time and remove many of the nontariff barriers, such as import licenses, that had helped to exclude US goods from the other two markets, especially Mexico. NAFTA also eased cross-border services rules, which ensured that US companies did not have to invest abroad to provide services in Canada or Mexico. The resulting trade increases necessarily involved many new business relationships and much international communication: According to the US–Mexico Chamber of Commerce, Mexican exports to the United States increased between January 1994 and December 1997 from $49.49 billion to $85.83 billion; US exports to Mexico over the same period increased from $50.84 billion to $71.38 billion (18).
Another spur to international communication came from the four-country customs union Mercosur, established originally in 1995 among Argentina, Brazil, Paraguay, and Uruguay. Mercosur has a combined market of more than 200 million consumers and a shared GDP of $1 trillion; it has tripled trade flows in the region in a little over 2 years. A free-trade agreement including all of the Americas is a long-term goal. Latin America and the Caribbean are home to roughly 475 million people and have a total domestic product of approximately $1.3 trillion. The European Union's plans to establish a free-trade agreement with Mexico are likely to intensify international trade and international communication as well. In addition to the international communication conducted directly by government representatives, such events prompt enormous amounts of press coverage and discussion within and between large multinational firms, all of which may have an international character.

Interdependency of International Business and Communication Technologies

International communication scholar Deborah Andrews calls information systems and technology "the essential connective tissue" of international organizations (19). Without such systems, decision-making essential for inventory management, marketing, strategic planning, and customer contact would be impossible or would become prohibitively expensive. The communication technologies and systems that support the global economy developed steadily decade by decade. In the first half of the twentieth century, international telegraph services and radio broadcasts transmitted by cable provided individuals with personal opportunities for international communication and access to information about world events. National broadcasting systems such as the British Broadcasting Corporation (BBC) and the Voice of America (VOA) not only made people around the globe familiar with British and US values and ideas, they established English as a global language and encouraged desire for Western products, creating similarities among customers across the globe despite many other cultural differences. Television surpassed radio as the international public broadcast medium by the time Queen Elizabeth II of England was crowned in Westminster Abbey in 1953, and the event was broadcast by television worldwide. Communications satellites support international communication by providing telephone, television, and data services between widely separated fixed locations and from fixed locations to mobile users. Proposed in 1945 by British scientist Arthur C. Clarke and advanced by the work of American engineer and scientist J. R. Pierce, earth satellites received their first experimental test in 1958 with the launch of a US satellite. During the 1960s, satellites were improved and developed, and in 1964 the International Telecommunications Satellite Organization was formed. Under its authority the first commercial communications satellite was developed to provide high-bandwidth telecommunications service between the United States and Europe as a supplement to the existing transatlantic cable and short-wave radio links. Soon satellites were launched by groups from many nations, and by the early 1970s the United States had adopted an "open skies" policy that allowed recognized legal entities to develop and launch special-purpose satellites. Satellites became the major means of routing international telephone calls in the 1970s, but by the 1980s high-capacity optical-fiber links began to be used.
As the political rivalry of the Cold War played itself out, electronic communication capacities had been developed to make possible the international communication that would be necessary for the operation of large multinational companies and the shifts of funds and information in a global economy.

Engineering as a Global Profession

By the end of the twentieth century, expanding commerce around the globe created new demands for engineering. Multinational companies rushed to establish operations in the EU or to expand operations around the world. Developing countries built plants, sponsored infrastructure projects, and sought foreign capital for resource development projects—all engineering-intensive. Large engineering companies merged to form giant multinational companies such as Halliburton, Kvaerner, and Bechtel to seize these opportunities. Coordination in such companies and contact with clients and vendors promoted a dramatic increase in international and intercultural communication. The interaction between client firms and engineering companies often involved the formation of project teams consisting of engineers from several companies and countries. The character of international communication changed as teams communicated and writers working at multiple locations across the world collaborated on the preparation of documents with electronic technologies. Instead of communications being authored by an individual or group in one country for receipt by an individual or group in another country, international engineering communication often involved groups of people from several cultures using processes of collaboration to benefit similar groups in another company. "Groupware" (software programs that allowed individuals in several countries to work together simultaneously or asynchronously over a network) facilitated this new kind of document preparation and decision-making (20). Widespread use of consultants in engineering projects also increased the amount of international communication as outside experts became involved in international projects. The employment of persons from many countries in the giant engineering firms made engineering an increasingly international and intercultural profession.

INFLUENCES OF CULTURE ON COMMUNICATION PRACTICES

Cultural Influences on Communication

International communication expert David Victor (21) argues that international communication is predominantly intercultural communication. Drawing on the work of anthropologists such as Hall and Hall (22,23) and Hofstede (9,10), he recommends that individuals about to become involved in international communication become aware of cultural influences by looking at seven factors: language, environment/technology, social organization, conceptions of authority, differences in representing or omitting background information necessary for interpretation, nonverbal behaviors used in communication, and attitudes toward time. Language rates attention from the cultural point of view because of the way that particular vocabularies influence perceptions of reality. It also rates attention because fluency and comprehension have relevance for media choices in international communication. A person who has low fluency may be much more able to interpret
a fax or e-mail message with the aid of a dictionary than to understand a fast-paced telephone conversation.

Beliefs about the physical environment and technology may also affect communication. In rugged terrain, face-to-face meetings may seem too troublesome; telephone conversations or e-mail may be better. Some companies adopt new technologies much more readily than others. Second, although the firms creating electronic technologies, especially software, usually follow US notions of design, greater nationalism everywhere tends to valorize local traditions and to increase resistance to global standards. Processes of technology adoption vary: In some countries the latest technology may be given to the most prestigious manager; in others, executives may resist learning to use computers because it might signal that they did not need the status symbol of assistants or secretaries.

Differences in status and gender may prohibit communication between some persons in a culture and prescribe paths for communication for others. Understanding to whom a message should be sent (and by which means) may affect whether the message is received at all. US expectations about women's rights to job opportunities and equal treatment are not matched in most other countries. Women who are planning to work abroad or to communicate internationally should consider how to realign audiences' expectations (23–25). A more comprehensive knowledge of gender differences can help everyone create more successful international communications.

The researchers E. T. Hall and M. R. Hall introduced the term "contexting" to describe the way different national groups vary in their preferences for including in a document all the background information necessary to interpret a particular message correctly. The more the groups tended to rely on shared experience, the less likely they were to include all of that commonly held knowledge in their reports, letters, and memos. The more mobile the society and the less likely the society was to have a single shared, dominant culture, the more likely that all the necessary references and details would be included in the document, handout, or presentation. The Halls ranked countries on a continuum according to how much they relied on stored information versus transmitted information. The following list of cultures from various countries and regions illustrates this continuum, beginning with the countries that rely on information shared in common by all participants and moving forward to those that most prefer all information to be explicitly represented: Japan, Arabic countries, Latin American, Italian, English, French, North American, Scandinavian, German, and Swiss–German. Countries that prefer to rely on shared knowledge or context are called "high context"; those that prefer to include all the information in the document are called "low context." People from high-context countries typically feel that documents from low-context countries tell them much more than they need, expressing a condescending or paternalistic attitude toward the audience. People from low-context countries typically feel that documents from high-context countries are sloppily or carelessly prepared and incomplete. Hypertext documents that allow readers to follow several paths through a body of information and to seek additional information as needed provide one possible solution to this discrepancy, provided that all parties are willing to use the technology.
Differences in nonverbal behaviors may cause misinterpretations. In Western countries, looking another person straight in the eye is considered an indication of honesty, forthright-
forthrightness, and trustworthiness. However, in many other countries, averting the eyes is a sign of respect and deference. The westerner trying to show trustworthiness by direct eye contact may insult an audience because of his or her presumed lack of respect. Posture, gestures, and how close people stand to one another can also cause discomfort and misperceptions about the other person's intentions. Victor's chapter on nonverbal communication offers a good review (21).

Attitudes toward time control the pace of communication and differ widely from culture to culture. In Mexico, the individual is considered very important, being prepared is highly valued, and relationships govern decisions. As a result, being punctual has less significance than taking enough time to deal fully with a valued person's needs or being completely ready for a conference. In contrast, US, German, and Swiss business people usually value punctuality or staying on schedule for its own sake. In countries where time is thought of as something all around one, like a pool of water, the urgency of linear notions of time and efficiency is largely irrelevant, and the pace of communication will be governed by other considerations. Negotiation about one's feelings about time can be very helpful, because when individuals independently try to adopt one another's attitudes toward time, mutual mistakes can occur. Discussing the matter discreetly in advance can clear the air.

Regardless of the benefits of trade, greater interaction and differences in cultural preferences increase the probability of miscommunication. Preparing to understand the cultural dimensions of international business can prevent some of the most obvious errors and reduce frustration and misunderstanding (26).

PROBLEMS RELATED TO INTERNATIONAL COMMUNICATION

Ineffective international communication can have many costly results, especially in engineering and technical communication. Miscommunication can lead to different understandings of proposals, instructions, change orders, and design specifications. Differing attitudes toward time may result in costly delays. A casual tone or failure to present materials in the proper style may sabotage a relationship. Failure to anticipate differences in technologies may result in incompatible systems or mechanical problems on a project. Legal difficulties and infringement of international agreements could occur if documents are not reviewed by knowledgeable counsel. Poor international communication strategies can result in unnecessary costs if they do not match the corporate mission and strategy: for example, translating every document into another language would be a mistake if the corporate strategy is to do no more than sell products through an agent. Selected labels, promotional materials, and installation and repair instructions would be the documents worth translating in that situation.

HOW TO COMMUNICATE WITH INTERNATIONAL AUDIENCES

An Overall Approach to the Process
Six cultural dimensions, some of which have already been discussed under influences of culture on communication practices, are centrally involved in composing processes and document design in international communication, Tebeaux and Driskill (27) believe. Brief discussions of these elements and their implications for document design follow.

Value of (Emphasis On) Either Individuals or Groups. Societies may emphasize either individualism or belonging to a group. For example, the United States prizes individual liberties; Japanese citizens see group membership as a principal source of identity. Letters to a business or person in a group culture should emphasize the relationship the writer is attempting to establish with the organization and deemphasize what the writer wants. Instead, a writer should stress how the business relationship between the two companies can flourish. It would be rather tasteless to single out individuals for commendation; speaking of "we" would be more appropriate than referring to what "I" want. In a group-oriented culture, correspondence may not include a greeting to a specific individual (not "Dear Mr. Chang" but the anonymous "Dear Sirs") because the communication is perceived to be between companies, not between persons. A conversational style in documents written to members of group cultures will foreground the sound of the message, rather than the visual clarity of the main issues. Courteous phrases, though formulaic, may nonetheless sound socially appropriate. Having a social voice is more important than having a personal or idiosyncratic voice, just as bowing correctly is more important than bowing with a flourish. To design a layout for a group-valuing audience, emphasize the identity of the organization: photos of groups and corporate landmarks will be more acceptable than photos of individuals (except for the president and chairman). Some document designs will not work for both types of cultures.

Separation Versus Merging of Business and Private Relationships. Cultures that separate business and private relationships expect a formal and impersonal style, content focused on tasks or issues, and a severe or utilitarian layout. In contrast, cultures that emphasize relationships and merge business and private relationships often include personal observations and personal information in cover letters (though not in proposals or reports) and use a reserved but positive tone. Casual style and forms of address should be avoided as not offering sufficient respect to the reader. Indirect messages, as opposed to direct, to-the-point messages, are usually appropriate. Borders, elegant serif fonts, embossing, crests, high-quality paper, and centered headings are among the appropriate choices. Similarly coordinated designs for brochures and reports suggest belonging to a group. Using correct titles consistently shows that the writer knows everyone's correct place in the group.

Degree of Distance Between Social Ranks. In high-power-distance cultures, using correct forms of address can make a big difference to audiences. Take time to find out specifically to whom to address a report or letter, the title or rank of that person, which names to place on the distribution list, and what rank each decision-making individual holds. Establishing the correct tone in addressing the intended reader(s) and thus recognizing the appropriate distance needed between writer and reader(s) can establish the proper identity for the writer. Thus, tone in high-power-distance cultures may need
to be more formal if the reader holds a position that is relatively superior to the writer's. Official formats, consistent graphic hierarchies, and communications designed for special occasions, such as commemorative scrolls, announcements, and commendations, will be appreciated. In low-power-distance cultures, strict recognition of business hierarchies and use of formal address gain less favor. Some corporate cultures emphasize low power distance by instituting casual dress and open-door policies and having a flat hierarchy in the organization. Some west-coast computer companies in the United States are known for this kind of culture. A wide variety of formats may be used, and informal layouts and casual typefaces will be acceptable. The style of the message can also be more casual.

Universal or Relative (Particular, Situated) View of Truth. Some cultures tend to emphasize principles that should hold true everywhere. Theocracies (where a single religion dominates the government) are an extreme of this form. When writing or speaking in a universalist culture, one should be as specific and concrete as possible. Clarity and precision via format, diction, syntax, and usage will be valued. In argument structure, reference to agreed-on principles should lead to interpretation of specific details; alternative possibilities need not be given as much attention. If the deity is considered all-powerful, scheduling and planning may be deemphasized inasmuch as outcomes and events will be seen as dependent on the will of Allah or the preeminent deity. Drawings of normative cultural types and national symbols, especially time-honored ones, may be more successfully associated with products or services than photos of specific individuals. Traditional layouts that may look busy to westerners will be acceptable. In societies where truth is conditional, dependent on specific situations, principles may be less convincing than the particularities of a plant, system, community, or problem. Organizing material chronologically to show the history of a situation or design will establish the truth of one's claims better than indicating that it conforms to a national or international standard.

Contexting. Some cultures expect that a message will include all necessary background information; in contrast, others may expect that a reader's knowledge of matters not included in a document will make correct interpretation possible. Discussed above, this cultural property affects what is included in a document or presentation more than any other aspect of the communication. Time spent reading about and studying the history and background of a high-context culture in advance of communication will help a great deal. For example, reading the slim volume published in 1989 by Mitsubishi Corporation, Tatemae and Honne: Distinguishing Between Good Form and Real Intention in Japanese Business Culture, will make a newcomer in Japan familiar with many terms and concepts occurring in business (28).

Predisposition Either to Accept or to Avoid Uncertainty. If readers are willing to accept uncertainty, problems can be named and explored, risks discussed, and options explored. In cultures that avoid uncertainty, documents are valued for documentation and governance purposes. Problems may remain implicit. ("Better to let sleeping dogs lie" would be a telling adage in such a culture.)
Forms, tables, and many appendices serve the needs of high-certainty-seeking cultures. Images associated with certainty (such as flowcharts that make outcomes clear), elaborate borders (the kind on stock certificates), high-quality paper, traditional layouts, precise physical images (such as photos, boxed quotations, or principles), and presentation techniques that look permanent (embossing, engraving, framing) will be valued. Each of these six cultural dimensions may combine in different sets in a particular society. Once the particular profile of a culture is reasonably clear, the associated design choices can guide the composing process at key points. Several other structured approaches have been formulated for applying the characteristics identified by anthropologists; often these are illustrated with the authors' consulting experience and can be very helpful to persons with little experience (29–35).

Presenting to International Audiences

If at all possible, find an opportunity to observe a presenter from the country or culture in which you intend to present. Presentation styles are affected strongly by attitudes toward power distance, individualism, rank, dependence on context for interpretation, and tolerance for uncertainty. In cultures that value groups over individuals, expect to present with averted eyes and body language that signals deference and respect. Stress nonbusiness topics unless a relationship has been well established, at which time specific information about a business project may be acceptable. Formulaic apologies for the inadequacy of one's efforts will seem courteous and respectful. Keep to the time schedule, but do not emphasize or refer to time in the talk itself. In contrast, Western audiences expect direct eye contact, no apologies, clear organization that addresses decision-makers' concerns first, problem recognition and analysis, and a task-oriented approach.

Preparing Images and Graphics for International Audiences

People from different cultures often do not interpret graphics in the same way. It is important to explain the graphic and label any icons or signage so that audiences will recognize these symbols. Deborah Bosley has shown that icons especially vary widely even for basic information (36). William Horton's work, while generalizing perhaps too much, is worth consulting for problems in intercultural representation in technical documents and software (37). Kostelnick (38) reviews two approaches to cultural adaptation of designs. Discussions of graphics, style, and a variety of other issues in carrying out international communication can be found in collections of essays such as Andrews' or Lovitt and Goswami's (39,40) and in special issues of the Journal of Business and Technical Communication and the Journal of Business Communication.

ETHICAL ISSUES

Ethical issues in international communication may be analyzed by several ethical frameworks: outcomes, intentions, and relation to codes or professional standards. The general injunction, "first, do no harm," measures outcomes by the type of effects. Communication that leads to exploitation of international audiences, physical damage to users or equipment,
environmental harm (such as sale of insecticides or chemicals banned in the country of manufacture), or political dependency or loss of cultural heritage can be faulted by application of this criterion. Judged by intention, deliberate communication to mislead international readers by omission of detrimental information, false promises, or representation of out-of-date technology as current or desirable would place blame with the sender. US standards for liability based on a failure to warn (rather than negligence or defective production) may apply when the manufacturer or communicator was in a position to have known about a hazard and failed to warn the user. When a product is inherently hazardous, the manufacturer is responsible for being able to show that the risks were recognized and weighed against the prospective benefits to the consumer. To meet this responsibility, manufacturers and engineers should maintain files within the company that document its deliberations. Hazard communication is especially challenging since both words and pictures or drawings may not be interpreted by international audiences in the ways manufacturers or sellers intend.

Professional engineers should consult the ethical codes of their professional engineering societies as well as their corporations in considering their responsibilities in specific communication situations. Most societies have ethical codes. The Institute of Electrical and Electronics Engineers code (41) can be found at http://www.ieee.org/committee/ethics/coe.htm, and a discussion list that allows users worldwide to discuss emerging problems and concerns can be subscribed to at that site. Corporate codes of ethics may also apply to particular situations. Legal considerations must always be taken into account, but should not be judged sufficient to encompass the full range of ethical issues at stake.

Challenging ethical dilemmas usually develop when competing goods must be reconciled. Technical innovation may produce benefits that simultaneously have negative secondary impacts in some circumstances. Planning that takes into account long-term effects or secondary outcomes may minimize negative results and preserve positive intentions. Catastrophes can cause extreme hardship in developing countries and serious liability exposure for companies. Few people have discussed the broad question of how communicators should serve international, not merely national or corporate, interests. New approaches to the ethical dimension of communication, such as the one proposed by Schultz (42), have appeared but without focusing on international communication. To be international in one's thinking about communication is to consider the effects of communication on all stakeholders regardless of national affiliation. Communicators should ask how they can re-envision the process of communication, their colleagues, their companies, and their research to contribute to a more just international community.

THEORIES OF INTERNATIONAL COMMUNICATION

Universalist Theories

Beliefs about great commonalities undergirded early modern approaches to international communication. Universalist views of communication assume that all people share a common humanity, and this shared concern for family, survival, beauty, peace, and so on provides the basis for cooperation
and communication. Similarly, some linguists, such as Matsumoto (43), are currently searching for linguistic universals in international communication. The challenge in this approach is to understand the interplay between context and universal linguistic features, as Yli-Jokipii attempts to do (44). C. K. Ogden's work on language theory (3,4) posited a limited number of semantic universals in all languages. English required only 18 verbs to capture these key concepts, he argued, and his system of Basic English added 600 nouns, 150 adjectives, and a small number of "operatives" ranging from prepositions to modals, with which Ogden believed it was possible to represent the meaning of some 4000 verbs used in English. With these basics, people who spoke other native tongues would be able to express the same fundamentals on which their own languages also rested.

Social Science Theories

International communication and intercultural communication are not yet well theorized despite the massive increase in the number of e-mail messages, reports, and proposals that speed across the globe. Theory lags behind practice, and practices change rapidly. The need for more helpful, useful theories is acute. Communication specialists have borrowed from anthropologists and sociologists, turning to Hall (22,23), Hofstede (9,10), and Trompenaars (45) for categories to describe foreign audiences. Victor's International Business Communication (21) is compact but draws illustrations of anthropological categories of difference from a wide range of nations. Goodykunst and Kim's collection (46), Readings on Communicating with Strangers, reviewed and applied social science research and scholarship for the benefit of communication instructors and specialists. Their approach, however, emphasizes the strangeness of the audiences involved. Similarly, Scollon and Scollon's Intercultural Communication: A Discourse Approach (47) applies discourse analysis techniques used in linguistics to help students "overcome discourse barriers." This conception of international communication as an assault against barriers, with its connotations of contest, obstruction, winners (and losers), unfortunately positions participants in combative rather than collaborative roles and raises ethical issues.

The good things that have been borrowed from the social sciences have been applied nearly as far as they will go in explaining international and intercultural communication. Furthermore, virtually none of these theories can be used to explain cognitive processes or has integrated concepts of electronic technologies into its explanations. Work in computer-mediated communication (CMC) has been concerned with cultural but not intercultural differences (48–52). Labeling prospective audiences by categories identifies probable predispositions of cultural groups, but the actual processes of specific individuals remain beyond reach. In an era of rapid change, advice that constructs others on the basis of data collected months or even decades ago is more likely to lead to stereotyping than to understanding. A different sort of theory should be developed that attends more to mutual processes of adaptation, feedback, and adjustment, with and without technologies. Theories and models that account for corporate processes are also needed, such as Leininger's (13), which shows the possible benefits of aligning corporate strategies and communication policies.
Postmodern Theories

Limaye and Victor point out that the prevailing paradigm for intercultural communication research is linear and process-based and that reconfigurations of the Shannon–Weaver model have dominated communication research (53). The Shannon–Weaver model arose from investigations of World War II military communication difficulties and included elements prominent in battlefield situations: senders, message, and receiver—a sort of walkie-talkie model. Various models derived from this basic threesome have also displayed the medium and "noise," a representation of electronic or mechanical difficulties as well as other communication problems. No people with personalities or their cultural predispositions or expectations were included. This family of representations is a clearly appropriate model for those who are working on communication equipment, but it is inadequate for representing the multitude of factors that influence communication success. However, new experiments in theory have explored the relevance of Donald Davidson's externalist philosophy, Thomas Kent's paralogic hermeneutics, and the Taoist yin–yang principle to furnish new approaches to intercultural communication (54,55). Yuan, for example, focuses on what is static and what is changing (yin and yang) in each speaker's behavior to describe how speakers from two cultures each adapt their own cultural styles in a transaction (55). By abandoning the anthropological classification systems, Weiss attempts to recast all communication as a process of translation, a move that brings him back (in some senses) to Linda Flower's contention that at the heart of composing is a process of re-representation (56).
RESEARCH METHODS

Empirical or Quantitative Studies

In empirical or quantitative studies, researchers measure aspects of international communication such as frequency, channels selected, costs, systems used, efficiency of groups using particular software, number of documents produced, types of documents produced or transmitted, and communication paths. The general purpose is to determine what is happening so that outcomes may be predicted in the future and processes can be controlled. The resistance of cultural phenomena to quantification has made empirical approaches alone less suitable for international communication than for domestic or monocultural communication, where quantitative studies can determine group norms and establish genres and conventions more readily.

One approach is consistent with the historic method of instructing others by showing model letters for imitation. The researcher accumulates documents created by a single national group or by employees of a single national firm and compares their features in order to define, for example, the "German" credit letter or the "US" letter of request (44). Although this approach may reveal some conventions and genre features, the data are usually not placed in context, little is known about the situations in which the documents were written, and the ethnic background and history of the writers are not known. Since so many employees in companies today are drawn from many diverse groups, often deliberately, the writers in a French company may not be acting in accordance
with their own background. Corporate culture may be overriding most influences of "national" origin. Because the approach focuses on document features, it may not succeed in identifying why features were chosen, what specific conventions mean to readers, or how the information is used in the situation.

Ethnographic or Qualitative Studies

In ethnographic or qualitative studies, the researcher becomes a participant observer of the discourse community for an extended period of time. The researcher accumulates copious notes, logs, interviews, and other evidence of what happened in a specific community in a particular period. Well established as a technique in anthropology, ethnographic research has been used to "bring back" the inside story on communication in foreign countries. However, the impartiality and even the ability of the outsider to understand what is going on have been problematic in proving the validity of the observations. Most important, the key reason for doing communication research is to be able to generalize on the basis of the observations and to teach insights to others. If the results of qualitative research are held to be applicable only to the single discourse community studied, then this method cannot fully accomplish the objectives of the broader community. Since new technologies facilitate and influence new communication practices, it will be especially important to find qualitative techniques that take less time to conclude research; otherwise, findings may be irrelevant to the changed technological settings and groups when the research is published.
ISSUES FOR FUTURE STUDY IN INTERNATIONAL COMMUNICATION

In addition to the need for new theories and models of international communication, researchers need a new infrastructure for studying international communication and reporting results. The competition between companies and the remoteness of communication sites make it difficult for researchers to gain permission to observe or to obtain materials to study. Furthermore, when the researcher remains within a firm or location for an extended period, he or she tends to become a member of the organization or discourse community and to be pressured for assistance in producing successful communications. This kind of participation jeopardizes the research by contaminating or modifying the process observed. Nonetheless, the rapid rate of technological adoption and innovation can quickly make obsolete what has been taught for a long time. For example, e-mail has become ubiquitous without receiving much attention from textbooks, which have continued to stress the primacy of printed documents and correspondence. Researchers need better access, quicker turnaround times, and greater access to employees who are involved in international communication. New trade alliances should include provisions for research to monitor and improve communication among companies involved in international trade in their regions. Governments also should fund research as part of their efforts to promote commerce and international investment. Studies of the effects of negotiation training and dispute resolution would also seem worth funding in order to
increase success in the ongoing expansion of trade that, from the beginning, has required international communication.

BIBLIOGRAPHY

1. J. Kirkman, How Friendly Is Your Writing for Readers Around the World?, in E. Barrett (ed.), Text, Context, and Hypertext: Writing with and for the Computer, Cambridge, MA: MIT Press, 1988, pp. 343–364.
2. D. A. Peterson, Developing a simplified English vocabulary, Tech. Commun., 37: 130–133, 1990.
3. C. K. Ogden, Basic English, a General Introduction with Rules and Grammar, London: Trebor, 1932.
4. C. K. Ogden, The System of Basic English, New York: Harcourt Brace, 1934.
5. Association Européenne de Constructeurs de Matériel Aérospatial (AECMA), AECMA Simplified English Standard, Brussels, Belgium, AECMA Doc. PSC-85-6598, issue 1, 1995.
6. D. Crystal, English as a Global Language, Cambridge, UK: Cambridge Univ. Press, 1997.
7. M. Thomas et al., Learning to use simplified English: A preliminary study, Tech. Commun., 39: 69–73, 1992.
8. J. Spyridakis, H. Holmback, and S. Shuberte, Measuring the translatability of simplified English in procedural documents, IEEE Trans. Prof. Commun., 40: 4–12, 1997.
9. G. Hofstede, Culture's Consequences: International Differences in Work-Related Values, Cross-Cultural Research and Methodology Series, vol. 5, Newbury Park, CA: Sage, 1984.
10. G. Hofstede, Cultures and Organizations: Software of the Mind, London: McGraw-Hill, 1991.
11. E. H. Weiss, Technical communication across cultures: Five philosophical questions, J. Bus. Tech. Commun., 12 (2): 253–269, 1998.
12. R. McCrum, W. Cran, and R. MacNeil, The Story of English, New York: Viking, 1986 (good for younger readers).
13. C. Leininger, Organizational strategies in international communication, J. Bus. Tech. Commun., 11 (3): 261–280, 1997.
14. North American Free Trade Association (NAFTA) Secretariat, sections on dispute resolution processes [Online], accessed May 26, 1998. Available www: http://www.nafta-sec-alena.org/english/index.htm
15. International Air Cargo Association web site [Online], 1998. Available www: http://www.tiaca.org/
16. T. D. Atkinson, Merriam Webster's Guide to International Business Communications: How to Communicate Effectively Around the World by Mail, Fax, and Phone, Springfield, MA: Merriam-Webster, 1994.
17. T. Morrison, W. Conaway, and G. Borden, Kiss, Bow, or Shake Hands: How to Do Business in Sixty Countries, Holbrook, MA: Adams, 1994.
18. United States-Mexico Chamber of Commerce, Trade Statistics [Online], 1998. Available www: http://www.usmcoc.org/eco2.html#a
19. D. Andrews, Information Systems and Technology in International Professional Communication, in C. Lovitt and D. Goswami (eds.), Exploring the Rhetoric of International Professional Communication: An Agenda for Teachers and Researchers, Amityville, NY: Baywood, 1998, in press.
20. P. Lloyd (ed.), Groupware in the 21st Century: Computer Supported Cooperative Working Toward the Millennium, Westport, CT: Praeger, 1994.
21. D. Victor, International Business Communication, New York: HarperCollins, 1992.
22. E. T. Hall, Beyond Culture, Garden City, NY: Anchor, 1976.
23. E. T. Hall and M. R. Hall, Understanding Cultural Differences: Germans, French, and Americans, Yarmouth, ME: Intercultural Press, 1990.
24. L. W. Hugenberg, R. M. LaCivita, and A. M. Lubanovic, International business and training: Preparing for the global economy, J. Bus. Commun., 33 (2): 205–222, 1996.
25. T. Holtgraves and J. Yang, Interpersonal underpinnings of request strategies: General principles and differences due to culture and gender, J. Pers. Soc. Psychol., 62: 246, 1992.
26. G. Ferraro, The Cultural Dimension of International Business, Englewood Cliffs, NJ: Prentice-Hall, 1990.
27. E. Tebeaux and L. Driskill, Culture and the Shape of Rhetoric: Protocols of International Document Design, in C. Lovitt and D. Goswami (eds.), Exploring the Rhetoric of International Professional Communication: An Agenda for Teachers and Researchers, Amityville, NY: Baywood, 1998, in press.
28. Mitsubishi Corporation, Tatemae and Honne: Distinguishing Between Good Form and Real Intention in Japanese Business Culture, New York: Free Press; London: Collier Macmillan, 1988.
29. R. Shuter and R. L. Wiseman, Communication in Multinational Organizations: Conceptual, Theoretical, and Practical Issues, in R. L. Wiseman and R. Shuter (eds.), Communicating in Multinational Organizations, vol. 18, International and Intercultural Communication Annual, Thousand Oaks, CA: Sage, 1994, pp. 3–11.
30. M. O'Hara-Devereux and R. Johansen, Globalwork: Bridging Distance, Time, and Culture, San Francisco: Jossey-Bass, 1994.
31. N. Hoft, International Technical Communication: How to Export Information About High Technology, New York: Wiley, 1995.
32. I. Varner and L. Beamer, Intercultural Communication in the Global Workplace, Chicago, IL: Irwin, 1995.
33. D. Andrews, Technical Communication in the Global Community, Upper Saddle River, NJ: Prentice-Hall, 1998.
34. C. Boiarsky, The relationship between cultural and rhetorical conventions: Engaging in international communication, Tech. Commun. Q., 4 (3): 245–259, 1995.
35. J. P. Bowman and A. S. Targowski, The layer-based, pragmatic model of the communication process, J. Bus. Commun., 25: 5–24, 1988.
36. D. Bosley, Visual Elements in Cross-Cultural Technical Communication: Recognition and Comprehension as a Function of Cultural Conventions, in C. Lovitt and D. Goswami (eds.), Exploring the Rhetoric of International Professional Communication: An Agenda for Teachers and Researchers, Amityville, NY: Baywood, 1998, in press.
37. W. Horton, Illustrating Computer Documentation, New York: Wiley, 1991.
38. C. Kostelnick, Cultural adaptation and information design: Two contrasting views, IEEE Trans. Prof. Commun., 38: 182–195, 1995.
39. D. Andrews (ed.), International Dimensions of Technical Communication, Arlington, VA: Society for Technical Communication, 1996.
40. C. Lovitt and D. Goswami (eds.), Exploring the Rhetoric of International Professional Communication: An Agenda for Teachers and Researchers, Amityville, NY: Baywood, 1998, in press.
41. Institute of Electrical and Electronics Engineers (IEEE), Policy 7.8, Code of Ethics [Online], 1998. Available www: http://www.ieee.org/committee/ethics/coe.htm
42. P. D. Schultz, The morally accountable corporation: A postmodern approach to organizational responsibility, J. Bus. Commun., 33 (2): 165–184, 1996.
43. Y. Matsumoto, Politeness and conversational universals—Observations from Japanese, Multilingua, 8: 207–210, 1989.
44. H. Yli-Jokipii, Requests in Professional Discourse: A Cross-Cultural Study of British, American, and Finnish Business Writing, in Annales Academiae Scientiarum Fennicae, Dissertationes Humanarum Litterarum 71, Helsinki: Suomalainen Tiedeakatemia, 1994.
45. F. Trompenaars, Riding the Waves of Culture: Understanding Cultural Diversity in Business, London: Economist Books, 1993.
46. W. Goodykunst and Young Yun Kim, Readings on Communicating with Strangers: An Approach to Intercultural Communication, New York: McGraw-Hill, 1992.
47. R. Scollon and S. W. Scollon, Intercultural Communication: A Discourse Approach, Cambridge, MA: Blackwell, 1995.
48. W. Ong, Orality and Literacy: The Technologizing of the Word, London: Methuen, 1982.
49. R. Lanham, The Electronic Word: Democracy, Technology, and the Arts, Chicago: Univ. Chicago Press, 1994.
50. A. Pacey, The Culture of Technology, Cambridge, MA: MIT Press, 1989.
51. S. Doheny-Farina, The Wired Neighborhood, New Haven, CT: Yale Univ. Press, 1996.
52. J. Siegal and T. McGuire, Social psychological aspects of computer-mediated communication, Amer. Psychol., 39: 1123–1134, 1984.
53. M. Limaye and D. Victor, Cross-cultural business communication research: State of the art and hypotheses for the 1990s, J. Bus. Commun., 28: 277–299, 1991.
54. T. Weiss, 'The gods must be crazy': The challenge of the intercultural, J. Bus. Tech. Commun., 7: 196–217, 1993.
55. R. Yuan, The yin/yang principle and the relevance of externalism and paralogic rhetoric to intercultural communication, J. Bus. Tech. Commun., 11 (3): 1997.
56. L. Flower, The Construction of Negotiated Meaning: A Social Cognitive Theory of Writing, Carbondale, IL: Southern Illinois Univ. Press, 1994.
LINDA P. DRISKILL Rice University
Abstract: International Communication
Wiley Encyclopedia of Electrical and Electronics Engineering
Standard Article
Linda P. Driskill, Rice University, Houston, TX
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W7021
Article Online Posting Date: December 27, 1999
Abstract | Full Text: HTML PDF (115K)
The sections in this article are: Language Issues in International Communication; Simplified Language Systems; Translation; Effects of National Laws on Communication; Effects of Cultural Differences; Interaction of National, Regional, Ethnic, and Corporate Factors; Influence of Corporate Norms and Policies; Corporate Policies About International Communication; Where to Find Help; The Demand for Increased International Communication; Influences of Culture on Communication Practices; Problems Related to International Communication; How to Communicate with International Audiences; Ethical Issues; Theories of International Communication; Research Methods; Issues for Future Study in International Communication
Abstract: Management of Documentation Projects
Wiley Encyclopedia of Electrical and Electronics Engineering
Standard Article
JoAnn T. Hackos, Comtech Services, Inc.
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.
DOI: 10.1002/047134608X.W5606.pub2
Article Online Posting Date: December 27, 1999
Abstract | Full Text: HTML PDF (683K)
The sections in this article are: Management of Documentation Projects; Phase 1. Information Planning—Identifying the Users' Information Needs; Phase 2. Content Specification—Presenting the Details of the Design; Phase 3. Implementation—Turning Design into an Information Product; Phase 4. Production—Ensuring Accurate, Complete Information for the User; Phase 5. Evaluation—Reviewing the Project Successes and Failures
MANAGEMENT OF DOCUMENTATION PROJECTS
MANAGEMENT OF DOCUMENTATION PROJECTS

Unfortunately, many people who are responsible for preparing technical information believe that their task is finished once they have written down what they know. They focus on recording their knowledge of a subject rather than considering the needs of those who will read and use the information. We have all had the experience of trying to use a technical manual to assemble and use a product we have purchased, only to discover that the information provided is not what we need to know. We have read the results of someone's research, only to discover that the methods are unclear and cannot be duplicated or that the conclusions do not appear to follow from the results. Information we need for our own work is simply not available. Professionally prepared technical information requires that we take into account from the first who will be using the information and what they will want to do with it. This article presents a five-phase process that will assist in conducting and managing an information-development project to ensure the delivery of user-focused information products at the end. Information planning is the first of the five phases of the information-development life cycle. The five phases to be discussed in detail are

Phase 1: Information planning
Phase 2: Content specification and prototyping
Phase 3: Implementation
Phase 4: Testing and production
Phase 5: Evaluation

This life cycle is discussed in detail in Ref. (1). The discussion focuses on explaining the key elements of the information-development life cycle as it relates to the activities of the entire information-development team, in addition to the project manager.

Three key practices mark the successful management of information projects—planning, estimating, and tracking. The purpose of these practices is to ensure that the final information products meet the needs of the intended users, and that they are accurate, complete, accessible, and usable. All management activities are thus directed to achieving these goals. If information is not accurate, users will make mistakes in performing tasks and interpreting results. If information is not complete from the users' perspective, they will not have the information they need to perform tasks or to solve problems. If information is not accessible, users will not be able to find the information they need quickly and easily. And, if information is not usable, that is, clear and understandable, users will not be able to achieve their goals.

PHASE 1. INFORMATION PLANNING—IDENTIFYING THE USERS' INFORMATION NEEDS

The Information Planning phase marks the beginning of the information-development life cycle. The planning phase allows the information designers to gather data that assist them in answering four key questions:

Who are the potential users of the product or process and its documentation, including users of the interface, the online help, and others, such as those who train and support the users?
What is their range of knowledge, skills, and experience in the subject matter covered by the documentation? What is their range of skills and experience in the tools used in the product?
What goals do they want to achieve using the product and the documentation? How do they achieve those same goals today?
What information styles will be most effective in helping the users to perform and learn?

To answer these questions about the users, information developers consider it critical to interact directly with the information users. This interaction includes observing them in the environment in which they will use the information. This direct observation of users is especially critical when the information is intended to support users in performing specific tasks. For example, operating a machine, using software to operate equipment or perform tasks, maintaining equipment, and using information for design and development are difficult to describe well without understanding how users approach the task. Because the goal of information planning is to develop information products that will meet user needs and enable them to achieve their knowledge and performance goals, the greater the understanding of the users and their requirements, the better the information plan is likely to be. For example, the information developers may discover that the needs of the assembly line personnel are best served by the development of job aids displayed in the workplace rather than lengthy and complex manuals. The information plan would establish the case for producing job aids rather than manuals. The information developers might discover that their design engineering audience is best served by conceptual information about the relationship between the design tools and their design goals. Similarly, they will make their case for this information design in the Information Plan. The Information Plan serves as a proposal for the information strategy devised for the project. The strategy ought to be based on first-hand knowledge of users and their needs, not on unexamined assumptions about the users. Several years ago, a team at one of the major semiconductor manufacturers investigated the information needs of the design engineers who purchased microchips from the company. The designers were most likely to be electrical engineers, who often had advanced degrees in the field.
The internal design engineers had always assumed that the information needs of their professional customers were identical to their own information needs. However, after the team surveyed the customers and conducted site visits, they learned that the customers wanted different information. They wanted to know how the chip might be effectively used in their designs rather than information about how the chip had been designed in the first place. The investigation demonstrated that the existing information design for the manuals was not meeting the customers' needs. To quote a customer interviewed for the study, "We need to know how to use the chip, not how to design it."

Creating the Information Plan

At the end of the investigation, after the users' information needs are well understood, the team of information developers prepares an Information Plan. The Information Plan summarizes the results of the investigation and presents the strategy for meeting user needs. This strategy outlines the role of all the layers of information delivery available. From the information designed into the interface, through context-sensitive help and more detailed conceptual, procedural, and instructional text and graphics, the information designers should explain how each piece of information will meet the needs of a particular segment of the user population. Included in the strategy should also be discussions of training requirements and ongoing support of users with information provided by field engineers, telephone support, or through continually updated information accessed electronically through Web pages, bulletin boards, fax lines, and more. The Information Plan presents the architecture of the information solution for the users. In the next phase, Content Specification, the broad brush of the Information Plan strategy is translated into the details of the many types of media that will be designed. Figure 1 illustrates part of a typical information plan, including a summary of the proposed strategy and predicted costs.

How Long Should Information Planning Take?

In the standard five-phase information-development life cycle, it is estimated that Phase 1 should take between 10 and 20% of the total information project time. For example, if you have estimated that the information-development project should take 6 person-months or approximately 800 development hours, then the information planning phase should take about 80 h to complete. The calendar duration of the 80 h may be more than 2 weeks because of the logistics in scheduling site visits and other information-gathering activities. For projects that have many unknowns in terms of users and their goals and activities, a higher percentage of project time may need to be allocated to information planning.

Estimating the Cost of the Information Project

The initial estimate of the information project is best made following the development of the Information Plan. If an estimate must be made before any information planning has occurred, it is best made in reference to previous similar projects.
For example, if the last project took 750 h over 6 weeks to complete and was similar in scope to the current project, and if the evaluation of the information delivered showed that it met user needs, then the next project is likely to take a similar amount of time. It would be wise, however, to conduct a preliminary analysis of project dependencies to ensure that they are also the same from one project to the next. Conducting a dependencies analysis is explained below. To estimate the cost of a documentation project requires that
the scope of the proposed project has been determined
the level of quality required to meet user needs has been determined
a history of previous similar projects has been compiled
the dependencies of the proposed project have been evaluated
the resources available to complete the project have been examined
the deadline and milestone schedule requirements of the project have been determined.

Each of these estimating requirements is described in the following sections.

Determining the Scope of the Information-Development Project

The scope of an information-development project reflects the amount of information to be developed. The scope should be derived from the Information Plan and from previous experience providing information for similar users. Note that estimating the information scope is often difficult for those inexperienced in information design and development. It may be useful to compare the project being estimated with previous projects, competitors' projects, or similar projects with which you may be familiar from other industries. For example, you may discover that a project similar in scope developed the previous year resulted in a 150-page manual, a 50-page set of error messages and solutions, and 250 online help topics. Then you might initially assume that the new project will have a similar scope. For a completely new project, you may need to make a rough estimate of tasks to be supported in the documentation and to multiply by a standard such as 5 or 10 pages for each task. It may be necessary to estimate the number of help topics to be included with a software package by calculating the number of help topics produced for a similar project and correlating these with the number of screen displays, dialog boxes, menu items, check boxes, choice buttons, and so on. The number of choices the user has may provide a rough correlation with the user tasks to be documented, as well as the number of context-sensitive help topics. Depending on the nature of the information media chosen, different units are employed as indicators of the scope of work. Table 1 lists units that are traditionally related to different types of information media. Experts in each media type typically use units of this sort as scope definitions for estimating purposes.
Table 1. Traditional Information Units for Purposes of Scope Estimates
Information Media: Unit
Printed or electronic text: Pages (approx. 400–500 words)
Stand-alone topics: Topics (approx. 250 words)
Context-sensitive online help: Topics (approx. 250 words)
Graphics: Image type
Video: Minutes of video
Classroom instruction: Hours of instruction
Interactive multimedia: Events (page turn, popup, animation, voiceover unit, and so on)
Sound: Seconds
Quick reference: Impressions (print images)
Added to the scope definition is the level of quality and complexity to be achieved. For example, a simple graphic image may take only an hour or so to create, while an exploded view of a complex mechanism may require 100 hours or more. A typical estimate of scope for an information-development project might look like Table 2.
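These rules of thumb lend themselves to simple arithmetic. The following Python fragment is a minimal sketch, not part of the original article: the multipliers (5 to 10 pages per task, roughly one context-sensitive help topic per user-interface choice) are the assumptions suggested in the discussion above, and the function name and example numbers are hypothetical.

# Rough scope estimate for a new information-development project.
# The multipliers are illustrative assumptions drawn from the rules of
# thumb above, not fixed industry standards.

def estimate_scope(num_tasks, num_ui_choices, pages_per_task=(5, 10)):
    """Return low/high page estimates and a help-topic estimate."""
    low_pages = num_tasks * pages_per_task[0]
    high_pages = num_tasks * pages_per_task[1]
    # Assume roughly one context-sensitive help topic per user choice
    # (screen display, dialog box, menu item, check box, and so on).
    help_topics = num_ui_choices
    return {"pages": (low_pages, high_pages), "help_topics": help_topics}

print(estimate_scope(num_tasks=30, num_ui_choices=250))
# {'pages': (150, 300), 'help_topics': 250}

A first pass of this kind only frames the estimate; the quality level, project history, and dependencies discussed next refine it.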
Determining the Quality Level Needed to Meet User Needs

In most hardware and software development projects, the quality level is determined by the effort made to reduce the number of defects in the final product delivered to the customer.
Figure 1. A representative example of an Information Plan.

Table 2. Typical Scope Definition from an Information Plan
Information Unit: Scope Definition
Users' guide: 60 stand-alone topics
Online help: 200 help topics
Conceptual guide: 45 pages
Training class: 6 h of instruction
Quick reference card: 2 impressions
In a software project, for example, the product specification may require that no Level 1 defects remain in the product before it is shipped, defining a Level 1 defect as one that causes loss of data for the user. Following typical testing algorithms, sufficient testing is conducted to estimate with reasonable certainty that no Level 1 defects exist. The quality level of information products can also be set by the level of effort taken to ensure that no defects exist and that the information meets the usability requirements set by the developers and the users. In some organizations, for example, information-development standards require that all task-oriented documentation be tested with the
product to ensure that no errors exist in the instructions. Other organizations require that documents pass the criteria set for usability testing of the documentation with the customers. If the test subjects can perform the selected tasks within a specified amount of time and at an acceptable level of performance (no unrecoverable errors, a stated level of user satisfaction, and so on), then an acceptable level of quality has been achieved. The quality level set for an information-development project will determine how much effort to devote to ensuring that the resulting products are defect-free and usable. It will take more time to complete a project that requires a high level of quality with regard to user requirements and usability than to complete a project in which defects in the documentation go unchecked and uncorrected. You must determine if lower levels of quality are acceptable to your organization, team, and customers if you choose to reduce documentation effort below a standard of quality.
Table 3. History of Typical Information-Development Projects
Project Type: Hours to Complete
Users' guide of 150 topics: 562 h (3.75 h/topic)
Users' guide of 234 topics: 982 h (4.2 h/topic)
Users' guide of 567 topics: 2336 h (4.12 h/topic)
Users' guide of 56 topics: 319 h (5.7 h/topic)
Table 4. A Typical Information-Development Team
Team Members: Percentage of Project Time
Project manager: 15%
Writer: 50%
Editor: 15%
Graphic artist or illustrator: 15%
Production specialist: 5%
Remember that lower levels of information quality may result in greater customer dissatisfaction, higher training costs, higher support costs, and lower product sales. Organizations that tolerate lower quality standards for documentation often also tolerate lower quality standards for their products in general. One general manager of a development organization reported that his company's quality standard was to "ship it because the customer will find the defects before we do." Other organizations require high levels of quality in documentation going to customers or employees because they recognize the impact of poor quality on their organizations' reputation. They also may be interested in reducing training costs or reducing the cost of customer support by providing information that permits customers to learn and act independently.

Compiling a History of Similar Projects

The histories compiled of similar projects completed in your organization will provide a baseline of data for evaluating the cost of a new project. For example, you may have determined that previous projects completed in the organization resemble those in Table 3. Hours to complete includes all development time (including planning), information gathering, writing, editing, capturing screen images, creating simple graphics, and managing the project. The hours do not typically include time for reviews by product developers and others not part of the information-development team. After histories of similar projects have been collected, you must determine if these projects represented the quality standard sought for the current project. The following questions must be considered. Have you surveyed user satisfaction with the information products? Are you aware of the number of support calls attributable to incorrect or unusable documentation? Are you aware of the time required by your customers to train end users of your product? Do you know if your customers are rewriting your documentation because it is inadequate to meet their needs? Are you providing more information than your customers find necessary? Review the section entitled "Phase 5: Evaluation" for ideas about studying the customers and evaluating documentation quality.
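As an illustration of how a project history such as Table 3 becomes a baseline metric, the short Python sketch below computes each project's hours-per-topic rate and the size-weighted average (total hours divided by total topics). The data are simply the four rows of Table 3; treating the weighted average as the starting metric for the next estimate is an assumption consistent with the discussion that follows.

# Per-project rates and the size-weighted average from Table 3.
history = [
    ("Users' guide of 150 topics", 150, 562),
    ("Users' guide of 234 topics", 234, 982),
    ("Users' guide of 567 topics", 567, 2336),
    ("Users' guide of 56 topics", 56, 319),
]

for name, topics, hours in history:
    print(f"{name}: {hours / topics:.2f} h/topic")

total_hours = sum(hours for _, _, hours in history)
total_topics = sum(topics for _, topics, _ in history)
print(f"Weighted average: {total_hours / total_topics:.2f} h/topic")  # about 4.17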
Once you have obtained a project history and decided that the project achieved an appropriate standard of quality, the data collected will help in estimating the next project. If you find that the quality level is inadequate, you may want to increase the metric to allow more time for quality assurance activities and quality improvement, including customer observations, usability testing, and functional testing. In Table 3, note that the smallest users' guide also required the most time to create in terms of the unit metric (5.7 h per topic). Frequently, efforts made to minimize information and make it more accessible and usable increase the unit metric at least temporarily until the new standard is well understood by the development team. Also note that the weighted average of the four projects is 4.17 h per topic. In estimating the cost of a new project, you may want to begin your calculation with 4.17 h per topic. If project histories are not available, you may choose to use industry averages as a starting point or a benchmark with other information-development organizations in the industry. Be certain, however, that you understand exactly what the managers are counting toward total development time and what level of quality they are working toward. One organization I studied produced high-quality minimalist manuals for a hardware product and averaged 9.75 h per page in doing so. Another organization averaged about 4 h per page for task-oriented software documentation. I have found that the development of usable context-sensitive online help typically requires about 2.5 h to 3.5 h per help topic, depending on the experience of the developers and other project dependencies. Classroom-based instructional design and course development is often quoted at between 20 h and 40 h per deliverable hour of instruction. Refer to others in the same industry who are known to track their projects conscientiously to help provide a starting point for project estimates.

Evaluating the Dependencies of the Proposed Project

Even if you have developed an adequate history of previous projects in your own organization, using the metrics of an individual project or even the metrics of an average project may not produce a good estimate of the cost of a new project. Not all projects are equal, nor are all projects average. I recommend using the Dependencies Calculator illustrated in Fig. 2 to calculate the effects of several potentially significant factors on a particular project. The Dependencies Calculator weighs the effects of ten typical project dependencies, using the midpoint (a rating of 3) as 1.00 (the neutral point) and assigning each rating above 3 a factor 5% over the previous rating and each rating below 3 a factor 5% below the previous rating. The ten dependencies on the sample calculator in Fig. 2 represent the typical dependencies we have seen in documentation projects. Dependencies that affect your projects can be added, or dependencies that do not apply may be omitted. Nine of the ten dependencies have increments of 5% above and below the center; the first dependency has a 10% increment above and below the center.
Figure 2. Dependencies calculator.
For example, a project would be considered highly unstable if you anticipated many functional changes throughout the development life cycle. Such product changes will affect the number of information changes that will have to be made during the information-development life cycle. A stable product with few changes will take fewer hours and cost less to document.

Consider the dependencies calculation for a specific project. Based on an analysis of previous projects with the same team and new information about the current situation, the project manager creates the following set of dependencies. On each five-point scale, 3 represents the average case in the organization, 4 and 5 represent worse than average scenarios, and 1 and 2 represent better than average scenarios, as illustrated in Fig. 3. On the basis of the calculation and with the starting point of 4.17 h per page, the project manager calculates the hours per page for this project to be 4.55.

Once the average hours per unit have been determined and the dependencies for a specific project have been calculated, multiply the hours per unit by the number of units to calculate the hours required to complete the project of the scope and quality specified. For example, a project manager knows that previous online help projects have averaged 3.1 h per help topic. The dependencies calculation for the new project results in 3.5 h per help topic. The manager estimates they will need to write 250 help topics based on a comparison with previous projects of similar scope. Therefore, the total hours required to complete the project are 250 times 3.5, or 875 hours. At an average charge per fully burdened hour of employee time of $65/h, the manager then calculates that developing the online help will cost nearly $57,000 of development time.
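To make the arithmetic concrete, the sketch below implements one plausible reading of the calculation just described. The 5% and 10% rating steps come from the text, but the rule for combining the ten factors (a simple average here) and every rating, unit count, and hourly rate in the example are assumptions for illustration; the actual calculator shown in Fig. 2 may combine the ratings differently.

```python
# Hypothetical sketch of a dependencies-based estimate.
# Assumptions: ratings run 1-5, rating 3 maps to a factor of 1.00, each step
# is +/-5% (+/-10% for the first, project-stability dependency), and the ten
# factors are averaged into one multiplier applied to the historical metric.

def dependency_factor(rating: int, step: float = 0.05) -> float:
    """Convert a 1-5 rating into a multiplier centered on 1.00 at rating 3."""
    return 1.0 + (rating - 3) * step

def estimate(base_hours_per_unit: float, ratings: list[int],
             units: int, rate_per_hour: float) -> tuple[float, float, float]:
    # The first dependency (project stability) uses a 10% step; the rest use 5%.
    factors = [dependency_factor(r, 0.10 if i == 0 else 0.05)
               for i, r in enumerate(ratings)]
    multiplier = sum(factors) / len(factors)      # combination rule assumed
    hours_per_unit = base_hours_per_unit * multiplier
    total_hours = hours_per_unit * units
    return hours_per_unit, total_hours, total_hours * rate_per_hour

# Illustration only: 3.1 h per help topic from project history, ten invented
# ratings, 250 planned topics, and a $65 fully burdened hourly rate.
hpu, hours, cost = estimate(3.1, [5, 4, 4, 4, 3, 3, 3, 3, 3, 3], 250, 65.0)
print(f"{hpu:.2f} h per topic, {hours:.0f} h total, ${cost:,.0f}")
```

The same function applied to a 4.17 h-per-page baseline and a page-count unit gives the corresponding manual estimate.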
Investigating the Resources Available to Conduct the Project

The primary resources required for an information-development project are people—writers, editors, graphic artists, production specialists, project managers, and other specialists depending on the nature of the media selected. The information project manager must evaluate the team resources needed for the project as well as the resources available. In the dependencies calculation, the experience of the team members has already been accounted for in part. If team members change, the dependencies for the project may also change. Table 4 illustrates a typical team and the percentages of time required per team member for an information-development project.

If you decide to assign the project to a single individual, consider carefully if the decision will be cost effective. Can the individual do his or her own quality assurance? We often find that writers are not the best editors of their own work. Can the individual do the graphics needed for the project? The project may require photographs of equipment. Does the information developer have adequate skills to take the photographs, taking into account lighting, correct exposures, editing, and digitizing? Is final production better done by a production specialist or by the individual? I argue that using a highly skilled communicator to do production tasks is not cost effective. As information projects become increasingly complex, with many media to choose from, assuming that a single individual can do everything effectively is a mistake.

In addition to evaluating the skills needed for the project, consider the availability of people for the project. Are all the experienced staff members already dedicated to several other projects? What hours are allocated to those projects? Are they available to work full-time on the new project, or only part time? If the cost of the rest of the projects has not been estimated, it is likely that team members have been over-assigned too many projects. Over-assignment usually means that quality is compromised to meet deadlines. Under-assignment of time does not guarantee quality either. People tend to fill the available time with work that may not advance the quality of the product. In fact, under-assignment may result in lower quality if too much unnecessary information is provided.
Figure 3. Sample dependencies calculation for a particular project.

Table 5. Milestone Definitions

Phase     Name                     Percentage of Total Project Time
Phase 1   Information planning     10
Phase 2   Content specification    20–25
Phase 3   Implementation           approximately 50, depending on Phase 4 requirements
Phase 4   Testing and production   18 or less
Phase 5   Evaluation               2
Unnecessary information clutters the document, making it more difficult for users to find the information they actually need. Manuals that are too long are often not used because they are intimidating to users. Online information that is voluminous often means online searches that result in hundreds of topics selected, making the critical information difficult to separate from the "nice to know" or the completely irrelevant. The key to maintaining quality at the specified level is to staff projects correctly, based on the estimates you have made. You need to ensure that sufficient staff are assigned from the beginning to take into account information changes that are likely to occur, but you must guard against using more staff than is necessary.
Determining the Project Milestones and Deadlines

Project milestones and deadlines for information-development projects are usually established in response to external factors such as product launch dates and customer requirements. However, to some extent, milestones should be viewed as dependent on your ability to conduct the project phases in a manner that produces a high-quality result. Based on experience with hundreds of projects, I suggest using the percentages illustrated in Table 5 to schedule internal milestones.

Estimating Phase 4 requires taking into account the production requirements of the project. For example, a report that is simply reproduced on a copier machine will have an almost negligible Phase 4. But a manual of several hundred pages that will be offset printed and bound
may take two or three weeks to complete. In evaluating the Phase 4 milestone, discuss the production techniques with people expert in the media that will be used. Remember that even information delivered electronically takes time to prepare and debug. In addition, note that XML-based (eXtensible Markup Language) publishing, which is widely supported, eliminates much of the traditional print preparation work. XML-authored text is format free, using XML elements to mark up text without the addition of format information. The format is added through a publishing process that applies style sheets to the unformatted text. The style sheets are designed to support the publishing of Portable Document Format (PDF) outputs, HTML, various help systems, or other output types. As a result of the automation, the percentage of time required for Phase 4 production work may be greatly reduced. However, the testing and translation activities that are an integral part of Phase 4 still require budgeted project time.
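As a rough illustration of that separation of content and format, the following sketch applies a small XSLT style sheet to format-free XML and produces HTML. It assumes Python with the lxml library; the element names and the style sheet are invented for the example, and a different style sheet (for XSL-FO on the way to PDF, or for a help format) could be swapped in without touching the source text.

```python
from lxml import etree

# Format-free content: elements mark up meaning, not appearance.
content = etree.fromstring(
    "<topic><title>Setting up the oscilloscope</title>"
    "<step>Connect the probe to channel 1.</step></topic>"
)

# A tiny style sheet that adds the formatting only when the topic is published.
stylesheet = etree.XSLT(etree.fromstring("""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/topic">
    <html><body>
      <h1><xsl:value-of select="title"/></h1>
      <xsl:for-each select="step"><p><xsl:value-of select="."/></p></xsl:for-each>
    </body></html>
  </xsl:template>
</xsl:stylesheet>"""))

print(str(stylesheet(content)))  # the same content can feed other style sheets
```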
Developing a Project Spreadsheet

One of the best and simplest ways of representing your initial view of an information-development project is a spreadsheet. The spreadsheet allows you to staff the project appropriately to meet the interim milestones and the deadline while maintaining the quality level demanded by the users. Figure 4 illustrates a typical project spreadsheet for a project that includes a user manual and an online help system. Each column in the spreadsheet represents a month or a week of the project. Each row represents the hours allocated to each person working on the project.
Figure 4. Sample project spreadsheet.
The total hours per person add up to the total hours required to complete the project as defined in the Information Plan. The spreadsheet can also be used to define the due dates of project milestones, for example, to calculate when Phase 2 is expected to be completed. If Phases 1 and 2 together represent approximately 30% of the total project time, then the spreadsheet can be used to calculate when 30% of the total hours have been expended. In addition to using the project spreadsheet to estimate and schedule the project, you can use it to track project progress, measuring hours expended against the percent complete of the project.
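A minimal sketch of that milestone calculation follows. The staffing rows, weekly hours, and the 30% target are invented stand-ins for the spreadsheet in Fig. 4.

```python
# Hypothetical weekly allocations per team member (hours per project week).
hours_by_week = {
    "writer": [40, 40, 40, 40, 40, 40],
    "editor": [8, 8, 16, 16, 24, 24],
    "artist": [0, 16, 16, 16, 8, 0],
}

weeks = len(next(iter(hours_by_week.values())))
weekly_totals = [sum(person[w] for person in hours_by_week.values())
                 for w in range(weeks)]
total_hours = sum(weekly_totals)

def milestone_week(target_fraction: float) -> int:
    """Return the 1-based week in which cumulative planned hours reach the target."""
    cumulative = 0.0
    for week, hours in enumerate(weekly_totals, start=1):
        cumulative += hours
        if cumulative >= target_fraction * total_hours:
            return week
    return weeks

# If Phases 1 and 2 together account for roughly 30% of total hours,
# the Content Specification milestone lands here:
print("Content Specification complete in week", milestone_week(0.30))
```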
PHASE 2. CONTENT SPECIFICATION—PRESENTING THE DETAILS OF THE DESIGN

In Phase 2 of the information-development process, information developers move from the general strategy for meeting user needs sketched in the Information Plan to detailed design plans for the media they intend to produce. The Content Specifications should demonstrate how the results of the user and task analyses will be played out in the design of technical information of all types, including context-sensitive help systems, a series of paper and electronic manuals, online or self-paced tutorials, computer-based training, and even classroom training and other online mechanisms for ongoing user support.
Some information developers may argue that detailed specifications for the design of information deliverables are premature, especially when aspects of the product functionality, the user interface, or the information content may not have yet been determined. This view reflects a mistaken notion of the concept of user- and task-oriented design. In a task-oriented information design, the users' need for information and the tasks the users need to perform should be the focus of the design, not the information content or the tasks performed by the product. Too often, information developers create superficial user- or task-oriented instruction that simply states information out of context or describes the product functions, rather than providing information that reflects the users' goals and objectives. If the users' goals and objectives have been analyzed in Phase 1, then the results of the analysis should inform the detailed design process of Phase 2. An information design that is based upon user tasks is unlikely to change in a substantial way during the course of the project, while an information design based upon subject matter, system tasks, or software design is likely to change whenever the subjects change or the underlying functionality is rethought.

A detailed content specification of the elements of the information design has several significant advantages over a vague plan in the head of the individual writer:
Information developers are likely to consider the design implications of their user and task analyses more thoroughly if they are required to write a detailed specification rather than a simple heading-level outline.
Reviewers are more likely to understand the intent of the design and its relationship to user needs from a detailed specification than they are from a vague, high-level outline.
Implementation of the design should not begin until the overall approach to the information is thought through. Remember that many information developers are likely to "just start writing without a well-organized plan in place."
In case of personnel changes, a detailed plan enables a new information developer to pick up more easily and quickly where the former developer left off.
Information managers can estimate resources required and plan a detailed schedule of milestone deliverables more effectively around a detailed plan than a vague outline.
A detailed Content Specification should include the following:
A description of the purpose of each information media that will be delivered
Measurable usability objectives for each deliverable
Brief summaries of user characteristics and tasks
A discussion of the design rationale for each deliverable
A detailed annotated outline of each topic to be designed, including the associated information type and whether the topic is new or requires minor, major, or no modification

The Content Specifications for documents, manuals, and some online help designs include a table of contents for the information with annotations, as shown in the following example.

Example of an Annotated Topic Outline. Section A. Setting up the oscilloscope. In this topic, engineers and scientists will learn the setup steps that they must perform so that the instrument can be used. The setup steps will include options that the user can exercise and indicate how those options will affect the types of measurements that can be taken.

A table of contents-like outline may be the best way to display the organization of information in a book or a traditional help application. However, other organizational structures may be more effective for planning the details of Web sites, help systems, tutorials, computer-assisted training, and others that have a hypertext rather than a linear structure. These structures are more effectively specified through hierarchical or Web-like models (hierarchy charts, Web maps) with which the designer can show
the relationships among the modules more easily than in a linear outline. A Web map can be used to show the relationship between a topic and the interface objects, facilitating the development of context-sensitive links. Each interface object in the Web becomes a starting point for access into the help system. Each help topic is shown as an object that can be linked to and from or accessed through a browse sequence or through a table of contents, keyword search, or full-text search. A Web map makes all the relationships clear, although it can become quite complex if there are several hundred or several thousand help topics or if the links are random rather than systematic. Systematic links help the users to predict the kind of information they will receive when they pursue a particular hyperlinked path. Random links often confuse the users, leading them to abandon references to the help system.
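The sketch below shows one hypothetical way to represent such a Web map in code: a mapping from interface objects to help topics plus systematic links among topics, with a simple check for links that do not resolve. All object and topic names are invented.

```python
# Context-sensitive entry points: each interface object opens one help topic.
context_map = {
    "TriggerMenu": "setting_trigger_levels",
    "ChannelDialog": "configuring_input_channels",
    "SaveButton": "saving_waveform_data",
}

# Systematic links between topics (the Web map proper).
related_links = {
    "setting_trigger_levels": ["configuring_input_channels"],
    "configuring_input_channels": ["setting_trigger_levels", "saving_waveform_data"],
    "saving_waveform_data": [],
}

def broken_links() -> list[tuple[str, str]]:
    """Report context entries or topic links that point at topics that do not exist."""
    problems = [(obj, topic) for obj, topic in context_map.items()
                if topic not in related_links]
    problems += [(src, dst) for src, targets in related_links.items()
                 for dst in targets if dst not in related_links]
    return problems

print(broken_links() or "all links resolve")
```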
Prototyping—A Phase 2 Opportunity for Rapid Information Design

During Content Specification, the information developers should begin producing prototypes of their design ideas. Design prototypes serve the same function in information design as they do in product design. They permit more effective feedback by other members of the design team and by potential users. Prototypes of manuals, help systems, computer-based training, and others can be reviewed using cognitive walkthrough or heuristic evaluation techniques, or they can be subjected to usability assessments. For example, if an information developer is contemplating a new structure for a Web-based information library, it is possible to test the structure simply by prototyping the sequence of hypertext links. Users might be asked to find information and their responses recorded to help decide if the design is usable before the information is finished. Early prototypes of Web-based content might also be created. They can accompany early product prototypes to provide supporting information to assist learners who bring fewer resources to the performance and learning problem. In one early test of interface and documentation, we not only learned that the interface created a flawed conceptual model of the product functions in the users' minds but that the prototype documentation did not help the users correct their misconceptions. Both product and documentation needed to be redesigned.

Prototyping also gives the information developers an opportunity to show concretely how the information will look to the users. The prototype designs should be fully formatted according to the requirements of the Information Plan and the Content Specifications so that the prototypes approximate the final look and feel of the design. In this way, users and team members can judge if the information designs are both usable and attractive. For most people unskilled in Web or book design or the layout of Web topics, an abstract description of the intended design fails to communicate effectively the full intent of the design. Only with a concrete representation is the reviewer able to construct a comprehensive mental model of the design and provide useful feedback as to its effectiveness.
Combining Phase 1 and Phase 2 for Revision Projects

For most information-design projects, it is important to keep Phases 1 and 2 of the information-development life cycle separate. Phase 1, Information Planning, encourages designers to look at the broad issues involved with satisfying user needs. In the Information Planning phase, media are selected, strategies developed, and usability goals established in keeping with the information learned from the users and other subject-matter experts. In Phase 2, Content Specification, the broad outline of goals and objectives is translated into specific information deliverables, each of which is itself carefully specified. Without both phases in place, information developers are more likely to recreate the status quo, producing the same dull, unusable information year after year without regard to performance issues or changes in the make-up of the user community.

On some projects, however, the information planning may have already been done for a previous version of the product. As a consequence, only Phase 2 specifications may need to be written to add new functionality to an existing structure or to make minor organizational adjustments based on feedback from users, trainers, and customer support. However, care should be taken to avoid endlessly maintaining an existing structure after it has lost its effectiveness. Too often, both product and information developers continue to make changes to an existing structure without ever examining its effectiveness or are afraid to make drastic changes to a structure that they have learned is ineffective. For that reason, it is useful to reconsider Phase 1, Information Planning, on a regular and frequent schedule throughout the life of a product.

PHASE 3: IMPLEMENTATION—TURNING DESIGN INTO AN INFORMATION PRODUCT

Some aspects of implementation of the information design begin during the prototyping activities of Phase 2. However, primary implementation work in Phase 3 should not begin in earnest until Phase 2 detailed planning, prototyping, and early usability studies are complete. We recommend that at least 30% of total project hours be devoted to Phases 1 and 2. That is, if a project is projected to take 1200 h, or two people working full-time for six months, then Phases 1 and 2 should consume at least 400 of the 1200 h. In that time, an information-design strategy will be complete and detailed.

During Phase 2, information developers are likely to begin assembling some of the technical content needed for the information deliverables. However, most of that content development should take place in Phase 3, Implementation. Phase 3 will take from 50% to nearly 60% of the total project time, depending upon the amount of time needed for testing and production during Phase 4. The percentage of time required for Phase 4, Production, will increase depending on such factors as printing, translation, packaging, and distribution. During the Implementation Phase, information developers begin to produce the information types outlined during planning. Typically, information deliverables go through at least two formal review cycles, following first
draft (alpha) and second draft (beta) development. Often, information deliverables go through at least one more informal review cycle early in Phase 3. In this review, small sections of a document or help system are circulated among informed people for feedback on content, style, layout, and so on. During Phase 3, the information developers are learning more about how to present the content and how to relate the content to the users' goals. Writers, instructional designers, illustrators, graphic designers, layout specialists, online specialists, video producers, animators, and other individuals representing the wide variety of media we can include will be involved in contributing their expertise to the emerging information.

As the information is created, it should be reviewed before every scheduled phase by a developmental editor, an individual skilled in heuristic evaluation techniques and alert for potential problems in organization, level of detail, completeness, tone, format, and more. The developmental editor should be a senior member of the information-development team with considerable experience working with information developers to assist them in ensuring that the project goals are being met. The developmental editor often assumes a teaching role, especially when some members of the team are inexperienced in the information design techniques used by the organization. The editor fulfills a significant quality assurance role by ensuring that goals and standards are met and that best practices are consistently followed. For detailed information on developmental editing, see Ref. (2).

In addition to developmental editing, the information-development team may include individuals expert in copyediting. The copyeditor ensures that the text, graphics, and layout conform to company and industry standards. Copyeditors generally check documents for spelling, grammar, consistency, adherence to regulations, and more. Early copyediting ensures that information does not contain errors that distract reviewers from their primary task of ensuring the accuracy of the information. Also available are automated quality management tools, which check a text against style guides, terminology lists, and other internal or international standards and provide a report to the writer and to the management, if desired.

Technical reviews are a standard part of the Implementation Phase but are frequently unsuccessful in helping to ensure the quality of the information delivered. In fact, many information developers consider the current technical-review process to be broken. The primary reason for review problems is a lack of commitment to the review process by the reviewers themselves. A thorough technical review of technical information takes time, on the average 5 to 10 min a page. It also takes careful attention to detail to ensure that the conceptual, procedural, and instructional text includes the correct information. It takes even more careful attention to ensure that no information that might help the users learn and perform has been omitted. Too often, those with review responsibilities do not allow sufficient time in their schedules for thorough reviews. As a consequence, incorrect information often finds its way into final information products.
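As a rough illustration of the automated quality management tools mentioned above, the sketch below scans a draft for terms on a hypothetical house terminology list and reports them by line. Real tools cover far more (grammar, style-guide rules, international standards); the term list and sample text here are invented.

```python
# Hypothetical terminology list: disallowed term -> preferred replacement.
DEPRECATED_TERMS = {"utilize": "use", "in order to": "to", "click on": "click"}

def terminology_report(draft: str) -> list[str]:
    findings = []
    for line_number, line in enumerate(draft.splitlines(), start=1):
        for term, preferred in DEPRECATED_TERMS.items():
            if term in line.lower():
                findings.append(
                    f"line {line_number}: replace '{term}' with '{preferred}'")
    return findings

sample = "Click on Save in order to utilize the new settings."
print("\n".join(terminology_report(sample)))
```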
Usability Testing of Documentation and Help

As soon as documentation and help are prototyped, usability testing can begin with actual users. Although the most complete assessment of the usability of the documentation will take place once draft software and the draft information are complete, early tests can take place with early prototypes of interface and information. We might want to learn, for example, if a minimalist approach we have selected for procedural information is sufficient to assist novice users in performing new tasks. With basic procedures in place and an early view of the interface, including paper prototypes, we can ask potential users to perform tasks following the instructions. More extensive usability testing is performed at the first, or alpha, draft phase.

Generally, we divide Phase 3, Implementation, into three sub-phases: first draft, second draft, and production draft. We define each draft by the percentage complete of the information. For example, we might expect the first draft of the information to be 90% complete, with draft graphics in place, a complete table of contents, and a rudimentary index. The second draft may be 99% complete in terms of text and graphics with nothing left to add except the final corrections and a complete index. At the second draft, sections of the documents are often released for translation. The final production draft is ready for shipment to printing or implementation on CD-ROM or as part of a Web site.

Usability assessment becomes a most serious activity at the first draft sub-phase. At this point, you can ask potential users to perform complete tasks with a product and provide the online help and paper documentation to answer their questions. During usability assessments at this point, however, a significant decision must be made. You can ask users to perform tasks with the product and simply make the technical information available without mention, you can remind users that help and paper documentation are available for their use, or you can constrain the task and ask users explicitly to use the help and the paper documentation to complete the assigned tasks. No technique is any more valid than any other, and only the last will provide an explicit test of the documentation. It is entirely possible that the first two choices will give little or no information about the effectiveness of the documentation. It is a good idea to pursue all three techniques. The first technique, not referring to the documentation explicitly, might be used before any help or paper documentation exists. The second technique will suggest at what point the users turn to the documentation for assistance. The third technique might indicate if the technical information adequately supports learning, especially for novice users.
Tracking the Project

Throughout the course of the project, but especially during Phase 3, you will need to track progress. Tracking progress includes knowing how much time has been allocated to each task, how much time has been used to date, how much time is remaining, and how much work is left to complete the task.
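The paragraphs below describe weekly reporting of hours expended and hours remaining; one hypothetical way to capture those reports is sketched here. Task names, allocations, and reported hours are invented, and the flag simply compares the projected total (hours spent plus the member's own estimate to finish) against the original allocation.

```python
from dataclasses import dataclass

@dataclass
class TaskStatus:
    allocated: float   # hours originally budgeted
    spent: float       # hours reported to date
    remaining: float   # the team member's own estimate to finish

status = {
    "help topics (writer A)": TaskStatus(allocated=300, spent=180, remaining=150),
    "user guide (writer B)": TaskStatus(allocated=250, spent=100, remaining=140),
    "editing": TaskStatus(allocated=120, spent=40, remaining=80),
}

for task, s in status.items():
    projected = s.spent + s.remaining
    percent_done = 100 * s.spent / projected
    flag = " <-- exceeds allocation" if projected > s.allocated else ""
    print(f"{task}: {percent_done:.0f}% complete, "
          f"projected {projected:.0f} h vs {s.allocated:.0f} h allocated{flag}")
```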
To gather the information, you must know if the project is on track. Each member of the information-development team should be asked to track his or her own progress. They need to know their allocated hours for each task and report how many hours they have expended and how many are remaining. It is best to ask everyone to report their hours weekly, before they lose track. The most difficult aspect of project tracking is, however, not the hours allocated but the progress of each task toward completion. People are often quite optimistic in reporting how much they have done and how much they have left to do; if they are performing a task they have never done before, they will have little sense of what remains to be done for completion. The project manager will need to assist them in evaluating their progress.

The project manager must also be alert for changes to the original scope defined for the project in the Information Plan and the Content Specification. If a help system with 150 topics has been specified, remaining on track will be difficult if team members add topics. They need to be aware of the original estimates and how they were made so that they track carefully any changes to project scope. No one should feel free to increase the project scope without approval of the project manager; the project manager must be prepared to estimate the effect of changes on the team's ability to complete the project on time and maintain the level of quality required by the user.

Reporting Project Progress

As the project is tracked, be prepared to report progress to team members and management. Progress reporting should include both oral reports and periodic written reports. In the written reports, include a summary of the progress and plans for the next period. Review the hours used and the hours remaining and estimate the overall progress toward completion. Finally, discuss any problems that have occurred or are anticipated. By dealing with problems immediately, you are more likely to solve them while they are still small. By anticipating problems, you will be able to deal with them effectively and quickly. Always keep the primary goal of the project in mind—to meet the users' information needs.

PHASE 4. PRODUCTION—ENSURING ACCURATE, COMPLETE INFORMATION FOR THE USER

During the 1990s and into the first decades of this century, traditional print production has been largely replaced by electronic methods, notably the Portable Document Format or PDF. Organizations prepare PDF versions of their documents that may be printed by a formal print vendor or an in-house print shop, made available through on-demand printing services, or published to a website for customer downloading. If formal printing is required, it should be noted that, even when advanced electronic production techniques are used, print production takes time that cannot be truncated. Large print runs of multicolor documents, including the binding and preparation for distribution, may take several weeks to complete. During this production period, no further changes can be made to the information
without incurring considerable expense. As a result, it is very important that no product or content changes occur. If they do, it is likely that users will be disappointed to learn that something they believe they should be able to do is not possible, or something is possible for which they have no information.

Even if information is distributed electronically, the final electronic files must be prepared. The preparation includes adding hypertext links, creating indexes to facilitate keyword searches, and testing functionality. Context-sensitive help systems, while requiring the same testing before distribution to users, also must have the links between software and help tested to ensure that the correct information appears. XML-based authoring further automates the publishing process by applying one or more style sheets to format-free content. Much XML-based publishing is very fast, producing output as PDF, HTML, various help systems, or other output. The implementation of XML-based publishing has reduced the time required for final documentation preparation from weeks to hours or minutes. Functional testing of electronic information, while it can begin in Phase 3, should become part of the functional testing of the product to ensure accuracy and completeness. Web-based information also requires functional testing to ensure that internal and external links operate as intended.

In addition to functional testing, Phase 4 often includes the most intense period of activity for translation and localization. A lack of discipline in the processes for developing product and information will have a considerable adverse effect on the cost and effectiveness of translation and localization efforts. To facilitate translation and localization, all product and information developers should work with standardized vocabularies, consistent structures for information, and a minimalist approach that reduces the text to what the user needs to know. If the developmental and copyediting functions described earlier have been applied to all the information deliverables, including the product interface, translation and localization will proceed more smoothly and result in more accurate information delivery to a range of user communities worldwide. Once again, the need for a careful and consistent approach to all the information that touches the user will reap benefits for both users and developers.
PHASE 5. EVALUATION—REVIEWING THE PROJECT SUCCESSES AND FAILURES

Since no one likes to review a string of failures, let us hope that following some of the steps outlined in this article will result in a successful project, one in which all information delivered to users is carefully planned and well integrated. Even if the project has been successful, there are always many opportunities to improve. Few projects are completed without communication challenges, especially when a diverse team of information specialists finds itself working together for the first time. Whenever a project team consists of people with different experience, training, perspectives, and personalities, there will always be opportunities for improvement.
In Phase 5, two sorts of evaluation are recommended—written and oral. Each member of the team may want to write a project wrap-up for the part of the project under his or her responsibility. The wrap-up reports might then be combined by the project manager into a single report. The wrap-up report should include quantitative information about project activities, including hours used to complete each phase in comparison to the original hours predicted to be used in Phase 1. If the total hours were different from those originally predicted, the project manager should account in the narrative for the differences, especially if the project has taken considerably longer than first estimated. Such an analysis will help the project team to better estimate its time on future projects.

In addition to the wrap-up report, the information-development team, including the product developers, should meet to review the report and discuss ways in which they might be able to work together more effectively in the future. It is better to discuss problems soon after they have occurred rather than allow bad feelings and resentment to linger into the next projects. Information design and development promises to become an integral part of product development. Even so, the processes need to be improved so that no member of the team is made to feel like a second-class participant. To develop information products that meet the complex needs of a wide variety of users throughout the life of a product, product and information developers must work together effectively. As the responsibility of our organizations for meeting user needs increases, so must our teamwork and our ability to listen and to respect the perspectives of development professionals from diverse disciplines. The development professionals must also learn that teams require collaboration, not competition. Through a sound information-development process, such collaboration will be enhanced.

BIBLIOGRAPHY

1. J. T. Hackos, Managing Your Documentation Projects, New York: Wiley, 1994.
2. J. T. Hackos, Information Development: Managing Your Documentation Projects, Portfolio, and People, Hoboken, NJ: Wiley, 2006.
3. J. Tarutz, Technical Editing: The Practical Guide for Editors and Writers, Reading, MA: Addison-Wesley, 1992.
JOANN T. HACKOS Comtech Services, Inc.
Oral Presentations. Arthur G. Elser, US West, Inc., Denver, CO. Copyright © 1999 by John Wiley & Sons, Inc. DOI: 10.1002/047134608X.W5615
ORAL PRESENTATIONS
Engineers must make oral presentations as a natural function of their work, whether or not they want or like it. This happens because they work with information that others need to make decisions about implementing new technology, going forward with a product design, or choosing between competing technologies or designs. The person most involved with the information, the engineer, has the details others need to make those decisions, so engineers are called upon to stand up in front of others to present that information.

Conferences, such as the many sponsored by the IEEE and other professional engineering and scientific organizations, provide opportunities for engineers to showcase new technologies, research, processes, and tools to their colleagues. The most common method of offering this knowledge is through a paper that is published in the proceedings of the conference and a talk, based on that paper, presented to a session at the conference. Those attending the session are scientists, practicing engineers, managers, and educators who have a vital interest in hearing the new information. If the presentation is vital and informative, those who hear it will often then read the paper for a more thorough treatment of the material and pass on their knowledge to their peers and organizations.

Careers are often made during these presentations, in both the presenters' organizations and their profession. Managers can make good decisions about how to proceed with technologies and projects, and others in the profession can use the information to make major technical advances. Conversely, poor decisions are often made because the engineer who had the knowledge failed to present it in a fashion that highlighted the critical information. What might appear to the presenter as exciting technical or scientific data are often seen as excruciatingly boring details by a manager or executive who has to make an important decision.

Failure to concentrate on three fundamental questions that are key to every piece of communication causes many people who give a presentation to be less than effective. The three questions are as follows:

1. What is the purpose of the presentation?
2. Who is the audience for the presentation?
3. What information does the audience need from the presentation?

An engineer who has worked for 6 months developing a prototype or technology has enough information to talk for days on it. But the audience only needs to hear the right information, presented in language it can understand, in order to achieve the purpose of the presentation. Again, most failures of oral presentations come from the failure of the presenter to consider the answers to these three simple questions.

ENGINEERING THE PRESENTATION
Fortunately, once the answers to the questions of purpose, audience, and audience needs are known, the process of designing and developing a good presentation is analogous to that of designing and developing a new product. It is a process that most engineers use every day in their work environments, one which can help take the mystery and fear out of developing and delivering good oral presentations. By concentrating on a few critical planning details when first setting out to design an oral presentation, engineers can help ensure that they develop and present the right material in the right way to influence the thinking of others. Creating a Specification for the Presentation A product specification tells how a product must perform, what requirements it must satisfy, and how it will satisfy those requirements. The specification for a presentation provides answers to similar questions by focusing on the three critical questions listed above. Knowing the answers to these questions, the engineer can narrow the range of information needed and select the proper language and graphics necessary to communicate effectively with the audience. Determine the Purpose of the Presentation. Often someone else has already determined the purpose. A manager or person in a position of authority has asked for the presentation to solve a certain problem. Some of the requirements might be: Do some research on technology G and make a presentation to my staff about how we might use it for the ABC product line. Karen Wright, the vice president of engineering, will be here next week, and I want you to spend 10 min giving her a progress report on your project. I’ll let you go to that IEEE conference, if you put together a 30 min talk on the three most exciting new technologies you learn about. Other times, an engineering team might decide to ask upper management for permission to adopt a new technology, buy new equipment, go ahead with a new process, or develop a new timeline for the development of a project. Each of these will probably require not only a written report covering the details of the proposal, but also a briefing for the senior staff so they can see the details quickly, ask pertinent questions, and make a decision. Each presentation has a purpose, and the engineer must decide on that purpose before starting to develop the presen-
tation. Most presentations fall into one of three major categories:

1. To provide specific information to the audience about which the speaker has special knowledge
2. To persuade the audience to act in a way the speaker wishes
3. To tell the audience how to perform a function or process

Given the same starting information, the speaker must select specific details and present them in a way to achieve each purpose. The types of presentations normally found in engineering environments include the following:

Presenting technical information
Requesting approval for a project or part of a project
Reporting progress at regular intervals on a project or program
Reporting trouble on a project so important decisions about how to proceed can be made
Reporting at the completion of a project what went right, what went wrong, and suggested improvements for future projects
Giving an impromptu talk on some aspect of a project or technology in which the speaker is expert

Each of these presentations has a specific purpose that the engineer must have firmly in mind when developing and giving the presentation. Even the impromptu talk in which the speaker has almost no time to prepare has a specific purpose. The questions asked, the topic mentioned by the person asking the speaker to give the talk, the context of the discussion that sparked the impromptu talk, all provide guidance as to its purpose.

Determine the Audience for the Presentation. The audience for a presentation determines what detail is selected. If, for example, the primary audience is a high-level manager with little background in the primary technology to be discussed, then the engineer must provide background information at the beginning of the talk and perhaps present it in a simple graphic way. A presentation like this might be one to the vice president for finance who must make a decision about providing additional funding for a project. In this case, the vice president might be more interested in the resource allocations and return on investment (ROI) for the project than much of the technical detail. But technical detail is necessary for a better understanding of the project upon which to make a sound decision.

If the audience is the staff for the project and the staff members are very familiar with the technology, then technical details can be discussed in great depth. In this case, charts and graphs showing performance characteristics of components selected, decay of power ratings with increased temperature, or EFI emissions at ranges of operating frequencies might be appropriate. What might appear as jargon to the vice president for finance will be technical shorthand in this presentation, a way of using a common language to get the point across quickly. The engineer can assume a common terminology
and technical background of the audience and prepare the presentation with that in mind.

Presentations often have mixed audiences and require special audience identification. If the vice president for finance and the project staff, along with other senior engineering managers, are to be the audience, the purpose of the presentation will help decide on the content, language, and graphic elements of the presentation. But the engineer must decide or must find out from someone in authority who is the primary audience. If it is the vice president for finance who must make a decision, then the speaker should tailor the presentation to the vice president. If the presentation is a technical review in which the senior engineering staff will make decisions about component selection or changes to the product specification, then it should be tailored to the language and knowledge of the engineering staff. The primary audience should be the focus of the presentation. Others attending will have to do the best they can with the information presented. Short asides to help those attending who don't have the technical background understand the material are appropriate, and most engineering staff members won't mind. Senior engineering managers often get a bit out of touch with the leading edge of technology, and the asides to the vice president of finance may help keep others from being embarrassed by asking a "dumb" question.

Determine What the Audience Needs from the Presentation. Knowing the purpose and audience helps the engineer in determining what to put in and what to leave out of a presentation. The third ingredient in determining what goes into the presentation depends upon what the audience needs from the presentation. If the vice president for finance is the primary audience and must make a decision about the future of the project, the presentation must provide information the vice president needs to make that decision. Technical details probably won't help. But the financial details of the project might be very important, answers to questions like how many R&D dollars are to be spent for development, how much do the manufacturing materials of the product cost, what is the ROI, and when will the product reach the break-even point? The chances are that the vice president will ask these questions if they are not answered, and having to ask them might create a poor impression of the speaker and the presentation.

A presentation to a senior engineering manager might review the technical specifications of the product under development, discuss the technical risks of the project, highlight contingency plans in case some technical obstacle can't be overcome, show how long the project will take, and detail what and when engineering resources outside of the project team will be needed. This presentation might be on the same project as the one described above for the vice president for finance, but will provide much different information because the needs of the audience are different. Some material will appear in both presentations, but in different terms and level of detail because of the difference in the backgrounds of the two audiences.

To review, the three questions that help the speaker determine the specifications for the presentation and focus it are as follows:

1. What is the purpose of the presentation?
2. Who is the audience for the presentation?
3. What information does the audience need from the presentation?

Once the engineer knows the answers to these three questions, he or she can start on the design of the presentation. It is a good idea for engineers, when first learning to give oral presentations, to write down the answers to these questions and refer to them frequently during the preparation of the talk.

Designing the Presentation

Now that the engineer knows what the specifications are, it is time to start designing the presentation. Just as an engineering team first establishes the design for a product before developing it, so too must the engineer create a design for the presentation before developing it. This phase is analogous to doing a top-down document for an engineering project, starting with a block diagram or flow diagram of the major functions. The design document for a presentation is a completed outline that fulfills the design specifications developed earlier.

Gather the Information Needed to Prepare the Presentation. The first step in any project is gathering the material needed to design the presentation. Memos and email from team members or management, notes made during meetings or phone calls, product specifications, and lab notebooks all provide information needed for the presentation. Where the engineer is uncertain of information, phone calls or email to the right persons can fill in the details. Reviewing the available information will refresh the presenter's memory and help form a solid basis for designing the presentation. Using the specification developed above, the presenter can easily identify material that must be included in the presentation and what should be left out. Another aid at this time is to make notes that condense information and gather it in one place. Creating a text file of the information that can be manipulated by cutting and pasting can help get material organized more quickly and make it more accessible.

Make a Working Outline. An outline is nothing more than the skeleton of the presentation. It is brief enough to view in its entirety to see how the pieces fit and make sure all the pieces are in place. The outline doesn't have to be formal, the way most composition teachers demand, just something written down to provide the backbone for developing the presentation. [Note: using the outlining feature of most word processing applications provides an easy way to get the outline done.] These outliners make it easy to include new information, restructure the outline, and add notes to which the presenter can refer. The outlines produced can also be collapsed to give the writer a better idea of how the major pieces fit and whether each level provides the same level of detail. These outlines are accepted by most computer presentation applications, like Microsoft PowerPoint and Adobe Persuasion, making the process of creating overheads or a desktop presentation much more automatic.

A presentation should have a beginning, middle, and end. Although this sounds obvious, many presentations fail because this rule is forgotten. It is best illustrated by the old military adage: "Tell 'em what you're going to tell 'em, tell 'em, and tell 'em what you told 'em."
To decide "what you're going to tell 'em," the engineer should again review the purpose, audience, and audience needs developed in the first part of this process.

Designing the beginning is easy. It should be a statement of the purpose of the presentation. This can be the first point on the outline and can come directly from the statement of purpose written during the specification phase. If the purpose is to inform senior management of the benefits of using a new technology in future products and approve a project to bring that technology in house, the purpose should state that. It might be worded something like this:

The purpose of this presentation is:
To help you understand BDX technology
To convince you that we should use BDX technology in future products because of enhanced performance, lower cost, and lower technical risk
To get your approval for a plan to bring BDX technology in house

This statement lets the audience know immediately why they are hearing the presentation and how they must use the information they hear and see.

The middle of the outline consists of those major points the speaker must present to the audience to achieve the purpose. For example, the purpose statement in the previous paragraph already lists the major divisions:

What BDX technology is
How it can enhance performance
How it can lower cost
Why it offers lower technical risks
What the plan is for bringing BDX technology in house

These are the five major headings of this presentation. Once the major ideas are in focus, each can be fleshed out into minor points which are critical to developing the major ideas. In a complex presentation, the minor points might be broken down further into subpoints. It is important to note, however, that all the major ideas should be in place before trying to break them into minor points. Trying to provide minor points before all the major ideas are in place often leads to poorly organized presentations.

The end of the presentation is merely a restatement of the beginning of the presentation. In this case it might be something like: "Now that you have been introduced to the BDX technology, have seen how it can enhance performance, reduce manufacturing costs, and lower the technical risk in the development of future products, I am convinced that you will want to approve our plan to bring BDX technology into our processes."

Test the Outline Against the Specification. When the outline is finished, test its completeness and appropriateness by making sure it meets the specification developed earlier. Ask these questions:

Does the outline focus on the purpose?
Is the audience capable of handling the information?
Will the presentation provide all the information the audience needs to achieve the purpose?
Where the outline falls short, revise it. If the outline provides information not needed for the purpose, eliminate it. Just as a design team conducts a design review before moving into development of a product, so too must the presenter conduct a design review before moving into the development of the presentation.

Developing the Presentation

If this were a technical paper or journal article, the engineer would now write the first draft from the outline, fitting the material gathered earlier into the outline. But since this is an oral presentation, the engineer must put information collected in the first steps of this process into the outline as reminder notes.

Fill in Outline with Notes. The engineer should now add information that appears in the gathered material into the outline. If you are using the outliner feature of a word processor, just type the notes right into the outline. It is best to write only words, phrases, specific numbers, or data into the outline to jog the memory about important points the speaker wants to make. Don't add complete sentences because they will hide the important points during the presentation when there is no time to read. Some speakers make the mistake of writing down everything they want to say and wind up reading their presentation to the audience. This is the very worst kind of presentation because most nonprofessional readers don't read with good pacing and emphasis and they cannot sustain any meaningful eye contact with the audience.

The engineer should look carefully at each piece of information to see if it can best be presented in graphic form. Tables, charts, line drawings, data flow diagrams, block diagrams, pictures, photographs, and cartoons often get the point across better than words. The audience can see the information and often carry that image away from the presentation, whereas words are often forgotten. The outline should show where each graphic piece fits. Incidentally, graphics display the important information needed so the speaker will have to refer less often to notes.

Test Your Notes Against the Specification. Read through the outline and associated notes to see how they meet the specification. Don't try to make adjustments during the first reading of the material. Just get a feeling for how the entire presentation flows or doesn't flow. Trying to make adjustments during this first reading may cause more problems than are already in the presentation because of the limited viewpoint the presenter has at this point. Once the entire presentation has been read, go through it again, testing it against the specification and make changes as necessary:

Eliminate information that is extraneous to the purpose.
Research areas that need more information.
Ensure that terminology is consistent with the audience's knowledge and experience.
Make sure words and graphics tell the right story to satisfy the needs of the audience to achieve the purpose.

When the outline and notes are in the right order and meet the specification, consider what material should appear on the presentation materials. And remember, if the same general topic must be offered to different audiences for different purposes, the speaker should prepare different presentations.

Develop Presentation Materials. Now that the material for the presentation is in final shape, the presenter should put it in the forms to be used during the presentation. A typical set of materials needed for a presentation is as follows:
Transparencies for an overhead projector, 35 mm slides, flip charts or posters, or a computer-based presentation that is fed to a projector or large monitor
Speaker's notes
Handouts for the audience

The materials used for the presentation should be kept as simple as possible. Many organizations have developed templates designed to make all presentations attractive, standard, and effective. The engineer should avoid the temptation to use colors, fonts, clip art, and other devices that seem high-tech but which can detract from the presentation if not used artfully. The rule here should be to keep the focus on the important points of the presentation, not on unusual combinations of color, clip art, and special effects. A well-designed and -developed presentation using black-and-white overhead transparencies is often the most effective. Color, when it is used, should be used to add to the effectiveness of the presentation. For example, providing data in a chart format in which colors allow easy identification of categories adds to the ease with which the audience can read and interpret information.

Most presentation software also prints speaker's notes and handouts for the audience. These can be very useful to both the speaker and the audience. Speaker's notes provide an easy way to keep the information flow synchronized to the display. And the audience handouts provide a good place for note-taking and writing down questions for later discussion. Another valuable reason for producing speaker's notes and audience handouts is to avoid the traps set by Murphy's Law. If there is only one bulb in the projector, it will blow at the beginning of the presentation. If there is only one room with a presentation projector, someone with more clout than the speaker will requisition it. The speaker's notes and audience handouts then allow the presentation to go on as scheduled, with the speaker using the notes for reference and the audience following along with the handouts.

Rehearsing the Presentation (Testing)

At this point in the development of the material, the engineer should consider it a prototype. It must be tested to see if it really works. Will it survive the scrutiny and questioning of a live audience? For most speakers, even practiced ones, nerves start to tingle and fear creeps in around the edges. Many actors and performers admit to having stage fright before each performance. One way to help relieve that anxiety is to rehearse the presentation in front of a friendly audience first.

Practice with Friends. Most engineers who must make a presentation are part of a work group with whom they are friendly and relaxed. It is a good idea to give any presentation a dry run with an audience that is familiar. Before starting this first presentation to a friendly group, however, the
speaker should review with the group the purpose of the talk, the characteristics of the audience, and the audience needs. This will help forestall comments and questions that are not valid, given the context of the presentation. It will also help the group offer better advice on what might be added or deleted. Someone may also know information that is critical to the presentation but of which the speaker is ignorant. Although embarrassing at the time, it will be much less embarrassing than if discovered at the actual presentation.

Give the Presentation to a Select Group. As a normal part of the review process for presentations going to high-level managers and executives, many middle managers will ask that the presentation be given to them and their staff members first. This review allows the speaker another opportunity to practice the talk and get more familiar with the material. If a review like this is not required, the speaker should seriously consider asking for one. Not only does this provide another opportunity to have the presentation critiqued, but it also gets the speaker more comfortable with the information.

Revise the Notes as Necessary. After each practice session, the speaker should review the comments from the audience and make the necessary revisions, but not just because someone said so. The speaker should review the comments with the specification in hand and make only those revisions that help the presentation better meet the specification. Before the real presentation, the speaker should practice the material at least one more time.

Visualize the Presentation as Going Well. Finally, when it seems there is not much more the speaker can do to prepare for the presentation, there is one other trick that works well for many people. World-class athletes are often caught on television visualizing their performance before they start into their routine. The Olympic skier will visualize each gate on the course and imagine going successfully and quickly through each one. This technique can work effectively for a speaker also. Spending some time alone with eyes closed, working through the material, seeing the audience responding well to each point, can help a speaker relax and approach the presentation with confidence. If time permits just before the actual presentation to the target audience, the speaker should visualize the presentation.

DELIVERING THE PRESENTATION (PRODUCTION)

If the speaker follows the process described above, actually making the presentation will seem anticlimactic. The information has been thoroughly planned and developed, the presentation has been through several dry runs, and the speaker is in command of the material. That is at least half, but probably more like 85%, of the battle. The only part of the process left is to run through the material one more time, this time in front of the intended audience.

Getting Ready for the Presentation

A few more checks will help reduce the nervousness and uncertainty that all speakers feel before making an important presentation. These checks are like making sure all the equipment in the factory is up to specification and working properly before starting a production run.

Arrive Early and Check the Room. The speaker should arrive at the room in which the presentation will be given at least thirty minutes before the audience. Make sure the room is laid out properly. Place audience handouts at each position around the table and on chairs away from the table. This will save time and effort once the presentation begins. If the temperature in the room is not proper, set the thermostat to a more appropriate setting. If pencils, pads, and water are to be provided, make sure they are in place. Having some small logistical detail out of place can induce a severe case of jitters in a speaker before an important presentation. Try to anticipate any other problems that might be settled before the audience arrives.

Check All the Equipment. The speaker should turn on any equipment to be used during the presentation to make sure it works properly. This can include video cassette recorders and television monitors, computers and associated projectors, the humble overhead projector, light dimmers, and projector screens that are motor driven. The speaker should make sure everything works as planned and that he or she is familiar with how to operate it. Have the first overhead or slide on the projector, or the computer presentation at its first image, so that a flip of the switch starts the presentation moving ahead. One hazard that sneaks up on many speakers is a projection screen that won't stay down. Discovering that at the start of an important presentation can induce a severe case of nerves.

Greet People as They Arrive. Most speakers have an increased anxiety level when staring out at the faces of people they don't know. Standing at the door of the room and informally greeting people and chatting with them helps relieve this anxiety. Greeting the vice president of finance, for example, a person the speaker might not know, is one way to help the speaker realize that the vice president is just another human being. Standing silently at the front of a room slowly filling up with people the speaker has little or no acquaintance with is a sure way to build a case of nerves just before the presentation.

Conducting the Presentation

The preliminaries are out of the way, and the speaker must quickly get to the material in the presentation. Being confident and competent in the material goes a long way toward making the presentation go well. There are, however, a few points to consider about the presentation itself.

Start and End on Time. The speaker should not become so engrossed in greeting those entering the room that the presentation does not start on time. The only reason, at this point, for not starting on time is if a key member of the audience has not yet arrived. If the presentation is for the vice president of engineering, the speaker must obviously wait for that person. If the vice president is in the room, but the speaker's manager is not, start the presentation on time. A punctual start will help bring about a sense of competence and planning in the audience and help the speaker achieve the purpose of the presentation. The presentation should be planned to take less time than allotted so the speaker can answer questions the audience
may have. When the allotted time is up, the speaker should gracefully bring the session to an end: mention that there is time for only one more question, ask the person or persons who are the primary audience if there are any more questions before the time is up, or just thank the audience for its attention. If the audience has made decisions based on the presentation, reiterate the decisions to make sure everyone present is clear on what they are. If people or groups have been given actions to accomplish, review those actions and the dates those actions should be complete. Attending to these details provides a sense of closure and purpose to the presentation and allows a graceful exit for the speaker.

Plan to Handle Questions. Handling questions during the flow of the presentation has its advantages, but it can also destroy the speaker's well-planned timing. If waiting until the end of the presentation seems best, the speaker should state that at the beginning. That is no guarantee, however, that the audience will accept the request. It is up to the speaker to keep the presentation moving along at a pace that allows it to be completed in the allotted time. If someone in the audience asks a question that will be covered in a later part of the presentation, the speaker should state that and move on. If the audience insists, then discuss the answer, and later, when reaching that part of the presentation, mention that the point has already been covered.

Deal Quietly with Hostility. Most speakers will, at some time or another, find someone in the audience who is hostile to the purpose or the speaker. Reacting to hostility with hostility guarantees negative results. The best approach for the speaker is to acknowledge the concerns voiced and move on. If the hostility persists, the speaker should ask the person or persons to meet at a later time and to let the presentation continue. This approach will often win the rest of the audience to the speaker's side because most people are embarrassed by unbridled hostility. If the speaker feels during the design and development processes that the whole audience will be defensive or hostile, the approach to the purpose should be changed. Rather than start with the purpose, the presentation should start with material that all can agree on. A statement such as, "Our company needs to find processes and technology that will lower development and production costs," will help defuse hostility. The presentation should move from universally acceptable statements to statements of fact that help the speaker show an alternate way to achieve those mutual purposes. The speaker should present a logical movement from universally acceptable statements in the direction of the speaker's purpose and recommendations. If the audience can accept the logic of the presentation, hostility should be lowered and a more rational approach to decision making should occur.

Learning from the Experience (Evaluation)

Once the terror, euphoria, nervousness, or whatever feelings were induced by the presentation subside, the speaker should review the experience to help with future ones. And be assured there will be future presentations. As mentioned at the beginning of this article, having to make presentations is a fact of an engineer's working life.
Analyze the Presentation and Its Results. Consider the overall presentation by "playing back the tape" of how it went. Using this evaluation to make future presentations better will help the engineer gain confidence and competence. The engineer should think about the following questions:

• Did I clearly state the purpose so the audience understood it?
• Did the presentation focus on the purpose?
• Did I correctly analyze the audience's experience and knowledge?
• Did I correctly anticipate the audience's needs?
• Did the presentation meet the audience's needs in terminology, information, and structure?
• Did the organization of the presentation help the purpose or get in the way?
• Was the length of the presentation right, neither too short nor too long?
• What should I do differently for the next presentation?

The engineer should make this evaluation within a day or so of the event, while the feelings and sense of how things went are still fresh. A week later, other intervening activities will dull the memory and the evaluation will not be as valuable.

Use the Evaluation to Improve. The engineer should make notes during the analysis and refer to them when getting ready for another presentation. Keeping the notes in a separate file on the computer used to design and develop presentations is a good way to track progress. Comparing earlier evaluations to more recent ones can show progress in making oral presentations.

Reading about how to make oral presentations in an attempt to get better is no substitute for making them, just as reading about how to play tennis or golf is no substitute for getting onto the tennis court or golf course and playing the game. The way an engineer can develop better presentation skills is to give more presentations, using the process described in this article. With experience, design and development time shortens for future presentations. Although it might seem painful, one way to get the practice needed to improve is to make presentations at every opportunity. When the project team must present a progress report to management, the engineer who wants to improve should volunteer to give it. Soon others will defer to the volunteer, letting their own fears of making an oral presentation keep them from learning. The important concept is for engineers to find opportunities to get better at making presentations, to enhance their careers, and to help their company's performance. Overcoming the fear of making oral presentations can be one of the most energizing ways an engineer can move forward. The skills so acquired can carry over into other community activities the engineer is involved in, making them more enjoyable and rewarding.
ARTHUR G. ELSER
US West, Inc.
Wiley Encyclopedia of Electrical and Electronics Engineering
Professional Journal Articles (Standard Article)
Linda Beebe, Parachute Publishing Services
Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W5613
Article Online Posting Date: December 27, 1999
Abstract. The sections in this article are: Journal Literature Media; Journal Articles Described; Journal Submissions; Rights and Responsibilities; Peer Review; Production Process; Distribution; Access and Archiving; Conclusion.
Keywords: journal articles; engineering papers; professional publication; electronic; print; submission; copyright; peer review; production; publication media; article distribution; archives
PROFESSIONAL JOURNAL ARTICLES Professionals and scholars communicate with each other and with the public in a variety of ways, both formal and informal. The informal means are myriad: in person, by phone, by email, chatrooms, conferences, exchanges of work in progress, and so forth. The four chief means of formal professional communications are conference proceedings, journals, books, and reference works. In addition, trade magazines and trade newspapers are increasingly important communications media in electrical and electronics engineering. Proceedings feature presentations prepared for a specific conference. These papers, generally prepared in advance of a meeting and rarely edited, often are the engineer’s first presentation of a discovery or an application. To be published in a journal, this work must be polished and documented, perhaps further researched. Although journal articles are expected to be state-of-the-art work, they must also be rigorously substantiated and carefully drawn. Once scholars have tried out their work in journal articles and had the benefit of responses, either published or unpublished, they may be ready to compile their information for a full-length book or a chapter in a multiauthored book. Whereas journal articles are expected to be original, books generally present established ideas or practices. Reference works, such as encyclopedias or other compendia, distill the established work further into material that will serve as a longer-term guide for a broader audience. Trade publications, particularly in fields that are changing rapidly, deliver technology summaries, information on new developments, and pointers to other work. They are, however, less rigorous than professional journals. This article addresses journal articles, the purposes they serve, their components, how they are selected, and how they are produced and disseminated. Although it is not a how-to guide to writing journal articles, the author hopes that it will be helpful to aspiring authors, researchers, and practitioners as they evaluate the literature they read and develop their own contributions to the engineering literature.
Journal Literature Media

Until the mid-1660s, scholars communicated with each other in correspondence, often forwarded from one to another, to share new discoveries and queries. In early 1665, almost simultaneously, the first two scientific journals were published. In France, the Académie des Sciences published the first issue of Journal des Sçavans in January, and in England the first issue of Philosophical Transactions of the Royal Society of London was released in March (1). From that point to the late twentieth century, print journals were the sole delivery medium, although electronic abstracting and indexing services came on the scene in the mid-1970s.

Print Journals. Advances in technology made print journals possible in the late seventeenth century. More than 200 years had passed since Gutenberg first adapted an olive press to use movable type for printing. Not only had printing progressed; improved roads and efficient postal systems made dissemination easier. In the three intervening centuries, technological discoveries have continued to enhance the production and distribution of scholarly and technical journals. Today, many engineering journals are published in four-color process on glossy paper, and they look remarkably like consumer magazines. Because of improvements in
production and mailing procedures, they also reach their readers more quickly, albeit not as quickly as many would like. Print journals continue to be the preferred medium for professionals and scholars, despite many predictions that electronic journals will totally replace them (2, 3). Their portability and ease of use are two reasons. To read a journal in print requires no electricity, equipment, or training. In addition, Schaffner (4) noted that the order and format of the printed page—abstracts, heads and subheads, references, and so forth—help the reader absorb the content. The electronic journals available as we approach the twenty-first century do not offer the same tactile pleasures of holding paper, scribbling in margins, underlining, and generally making the content your own. Further, as Lieb (5) said, print journals emphasize community building and dialogue with letters to the editor, essays, book reviews, and other general information that are still missing in electronic journals in the late 1990s. He suggested that electronic journals are currently more mausoleum than salon, despite their potential for much greater interactivity.

Electronic Journals. Moving journals into electronic media opens up many possibilities for interaction and greater functionality. Their speed of delivery makes them ideal for alerting scholars and professionals to new discoveries and research. They support far more sophisticated graphics to aid understanding. Whereas print journal pages are flat surfaces, the electronic journal can feature graphics in three dimensions and animation, and it can include video, audio, and other multimedia. Searching is far easier in an electronic journal. In a giant leap from the print reference list, an electronic journal can feature dynamic links to citations, to underlying data, and to other works in progress. However, electronic journals are in their infancy at the end of the twentieth century. In 1999, electronic journals constitute only a small portion of the total journal literature, and nearly all are electronic versions of a traditional print journal. For example, the Journal of Technology Computer Aided Design is IEEE's only solely electronic journal (see http://www.ieee.org/products/online/journal/tcad), and the Institute of Physics produces the New Journal of Physics (see http://njp.org/). Although there is no question that the number of electronic journals will grow rapidly, they are likely to be an adjunct to print, rather than a replacement. Meyers and Beebe (6) posited a "multiformat" future in which readers use electronic journals for current awareness and links to other works and print journals for reading and reflection.
Journal Articles Described

Characteristics of Journal Articles. A professional knowledge base cannot exist only in the minds of its practitioners. For a field to grow, scientists must find a means to exchange technical information broadly, to record knowledge permanently, and to describe the profession to the community at large. Since 1665, when the first scientific journals were published, the chief medium for scientific communication has been the scholarly journal. Bishop (1, p. 4) asserted that the heart of scientific literature is the primary journal . . . "upon which all the other parts depend and from which they are derived." According to Osburn (7), the journal is the " . . . key instrument of science, the social sciences, and of much of the humanities for the assessment and validation of an individual's work." Journal articles, in general, reach their audience much more quickly than other formal publications, and, most importantly, they provide a forum for open information exchange that is not possible with other means of publication. Journals create a stage for dialogue among scientists.

Originality. Unlike books, which typically provide established ideas or basic information, such as that found in textbooks or manuals, primary journals publish new materials. Articles in scholarly journals are expected to report original results that have not been published elsewhere. Publishers of journals in electrical and electronics engineering state specifically in their guidelines that submission of a manuscript constitutes a declaration that the paper has not been published anywhere else and is not under consideration by any other journal. Some authors have interpreted the requirement to mean not published in their field, and they have learned to their dismay that "not published elsewhere" means precisely what it says. If a paper has been
published anywhere—in the New York Times, in a journal published in another language, or in a popular magazine—it is not eligible for publication in a professional journal. The requirement for originality has several implications for authors. The first is that one paper, a specific arrangement of content and words, should not be replicated. Beyond that, authors are also expected to take care not to duplicate their own previous work or the work of others. The terms “salami publishing” and “least publishable unit” describe an unsavory habit some authors have of slicing their research studies into tiny pieces to try to get credit for more publications than the research results warrant. Similarly, some authors stitch together review articles that simply report the work of others without adding their own analysis that would move the science forward. Both practices violate the rules of originality in journal publishing. On the other hand, graduate theses are often the source of journal articles. This use is perfectly legitimate so long as the author uses the research as a basis for additional analysis and recommendations. Likewise, a review or tutorial article that surveys other published contributions does not violate the rule of originality if the author contributes some conclusions. There can also be some genuine confusion about what prior publication entails. Throughout history, scholars have tried out their ideas in various discussions and exchanges with their colleagues. The most common methods have been conference presentations and circulation of drafts to other scientists. In the 1990s, the Internet offers authors opportunities to obtain feedback from a broader group before they actually submit a paper for publication. For example, they may post a draft manuscript on their Web site or, in some fields, submit it to a preprint server that offers wide distribution. As publishers scrutinize these distribution methods more carefully, authors may find that inclusion in a preprint server makes their paper ineligible for a scholarly journal. Guernsey and Kierman (8) reported that many publishers are declining to review manuscripts circulated on preprint servers. Significance. A key criterion for acceptance in a professional journal is that the information have significance for the field. The primary journal article is expected to advance knowledge and build a base for future contributions to the literature. Scope and aim statements for some engineering journals use such descriptors as “vital,” “evolutionary,” and “important” to highlight the level of significance they seek. Even if the research is flawless and the article beautifully written, a paper is likely to be rejected if the results are trivial. Efficiency. Brevity, clear lucid prose, and the lack of extraneous information all contribute to the efficiency of an article. Although article lengths vary considerably from one journal to another, few journals accept papers more than 40 manuscript pages in length, and some have outside limits of six or eight pages. Guidelines for many journals point out the diversity of their audience and cite the need for clear, accessible language. Rather than adding to scholarship, turgid, ponderous language simply interferes with communication. Likewise, extraneous information, such as material that can easily be found in other publications, often adds length without contributing to knowledge development. 
The structure of the article, with the correct placement of the correct elements, is also a key to efficiency. Efficiency is increasingly important as the total amount of scientific information grows ever more rapidly.

Scholarly Rigor. It is not sufficient for authors to demonstrate what they know; they must also show how they learned it and why their contribution is unique. Some form of systematically gathered evidence is required. Although a journal may publish theorems without proofs, it will want the author to point the reader to where the proof is published—possibly in an appendix to the article. For laboratory and field experiments, journals generally want sufficient detail that the reader can be assured that the experiment could be replicated in order to validate the results. The facts should include the limits of accuracy and the degree of precision, any failures, and information that supports the validity of the method used. Ethical standards as well as scholarly rigor come into play with documentation, as it is necessary to credit all statistics and the words, thoughts, and findings of others. Authors are challenged to show that they have approached their work in a systematic way and that they have built on the work of others to expand the knowledge base. Accurate bibliographic citations also direct readers to sources of additional information.
Types of Articles. In general, electrical and electronics engineering journals publish three types of articles: application articles, research articles, and review or tutorial articles. Most journals also include letters to the editor, some of which may be quite substantive, and some have different departments, such as commentaries (opinion pieces), book reviews, and interviews. Application Articles. Many journals encourage these practice-related papers that demonstrate how an application can be used. Authors must supply a sufficient level of specificity so that the results can be replicated. Research Articles. Authors are expected to describe the problem that prompted the research and point out the gaps in current knowledge. They must describe their approach to solving the problem and detail the methods they used in their research. Using the data they gathered, they must analyze the results, tie them to the gaps they noted earlier, and cite implications for future work. Review or Tutorial Articles. Review articles analyze the state of knowledge in a certain aspect of a field of knowledge, summarize what is known about it, and explain why it is important. If the work is highly theoretical, the author must be as specific as possible about its relevance. Some journals require that a substantial portion of any paper be tutorial in nature. That is, the author is expected to provide early background material, a summary of the state of the art, and an evaluation of how the current project contributes to the knowledge in the field. Components of a Journal Article. At the most basic level, all journal articles, regardless of their type, must have a beginning, a middle, and an end. In the beginning the author sets the stage, defines the problem, and describes the purpose of the article. At the end, the author summarizes the article, provides implications, and points to other work. The middle is somewhat elastic and varies with the subject and type of article. A title and an abstract are also requirements for journal articles. Title. Journal guidelines generally call for titles that are descriptive, compelling, and as short as possible. Even if they have a tendency to try to tell the whole story in the title, authors have two good reasons to consider the guidelines carefully. First, reviewer reaction to the title can influence recommendations for publication. Second, once an article is published, a well-chosen title will lead users of tables-of-contents services and other automated databases to find the article more easily; hence, it will increase the potential for wide readership. Abstract. Nearly all journals require that authors submit an abstract that describes in brief what their article contains. Abstracts published with articles are helpful to readers in determining whether they need to read the entire article. The abstracts are also published in various abstracting and indexing services that lead other readers and scholars, often from multidisciplinary audiences, to work on the same subject. In addition, the reviewers and editor take the quality of the abstract into consideration in evaluating the manuscript. In journals in electrical and electronics engineering, abstracts are generally limited to about 150 words. They do not contain equations, figures, or tables. The object is to provide a succinct, accurate representation of the contents of the article. What was the problem? How was it studied? What were the results? 
Abstracts should cover the purpose, the problem, the methods, the results, and the conclusions. Key Words. Many journals request that authors submit as many as 10 key words that will be published with the article and used to facilitate searches. To assist authors in assigning appropriate key words, IEEE maintains a thesaurus of key-word indexing terms that can be found at http://www.ieee.org/power. The ACM Computing Classification System, which many computer journals use, can be found at http://www.acm.org/class. Introduction. Although the introduction is just one segment of the full text of an article, journal editors place such importance on a well-crafted introduction that it warrants special attention here. In the introduction, the author must set the stage for the article, describing its purpose and providing sufficient background that the reader understands the need for more knowledge on the subject and its significance. For a theoretical article, authors need to link their theory to familiar concepts. If the article is building on previous research, the author must delineate the strengths and limitations of previous studies. Authors proposing new techniques must make a strong case for the improvement expected.
Body of the Article. How the text of an article is organized depends on the type of article and its subject matter. Regardless of how the article is structured, the organization should be made clear to the reader through a series of subheads that form the bones of the article. An arrangement of primary and subordinate headings helps the reader understand the progression of the article. Some journals prefer that research articles follow the IMRAD (introduction, methods, results, and discussion) method of organization. Other journal guidelines suggest that headings should be descriptive of the specific content of the article. Because many journals have stringent length limits, authors are compelled to focus on the most important aspects of the work in question. All components, including any illustrations, must advance the premises the author puts forth. At the same time, authors must take care to state clearly all assumptions on which they base their results, and they must explain any unusual symbols or terms. In the conclusion, the author summarizes the work, explains any limitations, describes the implications, and lays out plans for future work. References. The reference list contains all of the information a reader would need to find the resource the author cited in the text. Consequently, each citation in the reference list includes the last names and initials of the authors, the title of the article or other publication, the name of the journal in which an article appeared, the volume number, the page numbers, and the year of publication. For books, references include the publisher’s name and the city in which the publisher is located. Reference lists include only those documents that a reader could reasonably be expected to find. No ephemeral documents, such as personal correspondence or unpublished works, are included in the list, although they are cited in the text. There are essentially two styles of bibliographic citations. In the name-and-date style, references are cited in the text with the author’s last name and year of publication in parentheses, and all references are listed at the end of the article in alphabetical order of the author’s last names. In the numerical system, the text contains a number in superscript, parentheses, or brackets; then the reference list at the end of the article is ordered as the references fall in the article. Illustrations. Photographs, figures, tables, and other illustrations may add to the clarity of presentation. The danger lies in using three tables when one would suffice or adding figures for decorative rather than communicative effect. On the other hand, dense paragraphs of text with no graphic relief can be daunting to readers. Authors must judge what illustrations augment and clarify the information they are trying to convey. Other Components. It is customary for authors to acknowledge the support and any contributions of colleagues who did not participate in the actual research or writing of the article. An acknowledgment section may also include a statement on any funding support. In addition, many journals request a brief author biosketch that generally includes the author’s title, institution, mailing address, and email address.
Journal Submissions

How Authors Select Journals. Within the field of engineering, there are more than 800 scholarly journals published in the United States (9). However, the degree of specialization is so great that most authors have a much narrower field from which to choose. Some authors, in fact, may work in such a specialized area that only one or two journals may publish their work. Others have a broader range of options and may have the luxury of considering a number of factors before they select a journal.

Mission and Audience. Good authors are avid readers of the literature in their field; consequently, they begin their search for a publishing outlet with a clear understanding of the likely journals. Savvy authors also scrutinize the journal Web sites and take careful note of the aims and scope statements as well as the descriptions of the audience the journal is trying to reach. Journals, for example, may seek descriptions of new techniques or products. They may want to publish data that will be useful to engineers or researchers concerned with circuits, materials, or software. Published journals demonstrate the subject matter and level of writing the journal has accepted to date. The Web sites and published author guidelines, which may include
announcements of special theme issues, give further clues about what the journal editor and editorial board are seeking. Sometimes authors can consider whether they wish to reach a broader, more generic audience, which is likely to be larger, or whether they want to publish for a specialized, perhaps very technical audience, which is likely to be smaller. Authors may also consider what Day (10) described as the prestige factor. Disciplines generally identify prestigious journals by who the publisher and the editor are, who serve as members of the editorial board, and how rigorous the review process is. Often the journals considered prestigious are those with lower acceptance rates. An easier test for authors is to determine which journals have published the articles they consider important. The journals that published the works the author is citing in this new article are likely candidates.

Odds of Acceptance. In some disciplines and specialties, it is easy to obtain average acceptance rates for various journals because they are published either in the journal or in guides to journals in the field. The size and frequency of the journal are other indications of the odds of acceptance, particularly if the journal maintains a short lag time from acceptance to publication. The quarterly journal that contains 128 pages can publish far fewer articles in a year than the monthly journal that has 224 pages.

Time Factors. Authors experience lag times in their quest to get into print in three areas: (1) the time it takes to review the article; (2) the time it takes them to revise the article according to the reviewers' comments; and (3) the time from acceptance to publication. They have influence only on the second factor. In general, journal guidelines cite review times ranging from two months to six months. The longer time is probably related to multiple reviews, sometimes requested serially when the editor requires an additional review to make a decision, as well as to reviewers being slow to return their evaluations. IEEE (11) specifies that editors should ensure that reviewers return comments and opinions within 30 days maximum. In looking at lag times for the 151 articles published in the IEEE Transactions on Geoscience and Remote Sensing in 1997, Raney (12) found that the average time from submission to publication was 15 months, 10 months of which was consumed by review and revision. He noted that a scan of other similar journals revealed similar lag times. Many journals now publish the submission and review time history with articles. By reading these footnotes for a journal of interest, an author can easily estimate an average lag time.

Access. In the highly technological field of electrical and electronics engineering, assuring multiple points of access to journal articles is not such an issue as it may be in other fields. However, there are still variations in the level of access journals offer. Authors will look first at where the journal is abstracted and indexed. Most journals publish that information in their issues and list it in their author guidelines. If the journal is not published in electronic form, the author will consider whether tables of contents and abstracts are published online.
Access through document delivery services, such as Bell & Howell Learning Information’s ProQuest, INSPEC from the Institution of Electrical Engineers (http://www.umi.com/hp/Features/Inspec), UnCover (http://uncweb.carl.org), or the Canadian Institute for Scientific and Technical Information (CISTI), (http://www.cisti.nrc.ca/cisti/docdel/docdel.html), is another consideration. Publishers also offer access through their own systems. For example, ACM maintains a library of full-text articles in the ACM Digital Library (http://www.acm.org/dl). The IEEE/IEE Electronic Library includes all IEEE journal articles, conference proceedings, and standards published since 1988. Articles in journals published by John Wiley & Sons are available through Wiley Interscience (http://www.interscience.wiley.com), and those published by Elsevier through Elsevier’s ScienceDirect (http://www.sciencedirect.com). Providing linkages from one reference to another across publishers is likely to be an increasingly important aspect of access for the author and the reader. Difficult Decisions. Sometimes an author must make a choice between timeliness and prestige, or even quality. For example, electrical engineers often must choose between presenting their work at a conference and writing a journal article. The conference is more immediate, it offers the opportunity to interact with colleagues and get instant feedback, and it may assure the author of a trip to an enjoyable location. In addition, the criteria for conference presentations are generally less rigorous than journal criteria. On the other hand, proceedings often do not receive the same level of editorial and quality control attention that journal articles
do. The audience is more restricted, and the proceedings usually are not included in abstracting and indexing services. So the author may sacrifice greater professional recognition for "instant gratification." To obtain both immediacy and prestige, the author must invest the effort to create two sufficiently different products to meet the journal requirement for originality.

Requirements for Submission. There are four components to the submission of a scholarly article in electrical and electronics engineering. They include copyright transfer, paper copies of the manuscript, an electronic file, and illustrations. For engineers working in corporations, company approval is a fifth component.

Copyright Transfer. Journal publishers require that authors transfer their copyright to them before they will review a manuscript. Although the copyright forms vary from publisher to publisher, in general the author affirms the following:

• Transfer of copyright
• Originality of the work
• Authorship
If several authors are involved, either all authors must sign or one author must attest to having the authority to sign for all. If the article is not accepted, the agreement is considered null and void, and copyright reverts to the author. Journals often publish their copyright transfer form in the journal. In addition, all of the major publishers of electrical and electronics engineering journals post their forms on the journal Web sites.

Paper Copies. Journals still request paper copies, even though electronic files are required. The number of copies varies from journal to journal, ranging from two copies to six. Specifications for paper copies include double-spaced copy, generous margins (one inch to one and one-half inch margins on all sides), and clear, readable fonts. Most journals request a cover sheet or title page that includes the title of the paper and the names and addresses of authors. The abstracts and key-word paragraphs are separated from the body of the article and immediately follow the cover sheet. If a list of symbols is required, the list follows the abstract. The text of the article is next, and following it the references. Tables and figures are provided separately. Authors should also include acknowledgment of any grant or other financial support of the work, as well as a note on any conference presentations that preceded the writing of the paper.

Electronic Files. Although some journals request electronic files only upon acceptance, many request them at the point of submission. Some journals accept email (generally only once the editor has requested it), but many request that authors send disks. If manuscripts contain extensive mathematics, journals often request that files be sent in the TeX, LaTeX, or troff programs. Some journals accept files in Word or WordPerfect; others request that all files be sent in ASCII. Many journals specify that the following should be avoided:

• Page layout software (such as FrameMaker, PageMaker, Quark, or Ventura); ASCII is generally preferred
• PostScript files
• Special macros; standard program codes are preferred
Just as different components start on separate pages in paper copies, the electronic version should include separate files for the various elements, such as abstracts, body, references, illustrations. It is particularly important to publishers that graphics be separated from text. Many journals provide templates to help authors prepare electronic files. For example, Springer offers a template for its preferred Word program at http://www.springer.de/author/index.html. Authors can obtain an IEEE LaTeX style file by emailing
[email protected]. Files should be labeled according to journal instructions, most of which can be found on journal Web sites. Illustrations. Journals have specific requirements for graphic widths; authors should check guidelines to obtain specifics. Most journals request that graphics be prepared using specific fonts. Because illustrations
must be camera-ready, they constitute an exception to the software to be avoided. For example, IEEE (11) asks that all graphics be submitted in PostScript, Encapsulated PostScript (EPS), or Tagged Image File Format (TIFF). Authors need to know whether a journal will print in color or black and white in order to submit the appropriate file. It is likely that most line drawings will be black and white. If an author is using photographs, they should be glossy prints with no screens. Most journals will not accept laser prints as replacements for photographs or gray-scale graphics. Because graphics probably will be reduced, the lettering and other details should be large enough to be legible when the graphic is reduced to fit the column or page width of the journal. Journals request that authors put captions in the file, not on the artwork. Corporate Sign-off. Approval for public release of information is required before an author can submit an article, and multiple sign-offs may be necessary. For example, the company export approval officer may review the article for infringement of restrictions on export of technical information, because most journals have international distribution. If the work was funded by an external customer, the customer’s representative must approve the article; and if the work is related to any classified government work, the company security officer will be involved.
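To make the electronic file requirements described above more concrete, the sketch below shows one way a LaTeX manuscript source might be organized. It is a generic illustration only, not any journal's actual template: the document class, packages, section names, sample equation, and sample reference are assumptions introduced here for illustration, and an author would substitute the style file or template distributed by the target journal and keep figures in separate EPS or TIFF files, as noted earlier.

```latex
% Minimal, journal-agnostic sketch of a LaTeX manuscript source.
% Class, packages, and headings are placeholders; real submissions
% should use the style file or template supplied by the journal.
\documentclass[11pt]{article}
\usepackage{amsmath}   % mathematics support
\usepackage{graphicx}  % figures, usually submitted as separate EPS/TIFF files

\title{Descriptive, Compelling, and Short Title}
\author{A. N. Author}

\begin{document}
\maketitle

\begin{abstract}
About 150 words covering the purpose, the problem, the methods,
the results, and the conclusions; no equations, figures, or tables.
\end{abstract}

\section{Introduction}
State the problem, its significance, and the gaps in current knowledge.

\section{Methods}
Mathematics is typeset directly in the source, for example
\begin{equation}
  P_\mathrm{out} = \eta \, P_\mathrm{in},
  \label{eq:efficiency}
\end{equation}
and referenced in the text as Eq.~(\ref{eq:efficiency}).

\section{Conclusion}
Summarize the work, its limitations, and implications for future work.

\begin{thebibliography}{9}
\bibitem{example} A. N. Other, Title of the cited article,
  \emph{Journal Name}, \textbf{12}: 345--356, 1999.
\end{thebibliography}

\end{document}
```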
Rights and Responsibilities

The issues of who owns what and how materials may be used have become more complex in the digital age, as technology has made it easier to transfer or copy materials. These complexities may increase what Grycz (13, p. 73) described as the tension between the author who wants the widest distribution possible and the publisher who "must be concerned with the economic viability of publishing as a business." Current copyright law is encoded in the Digital Millennium Copyright Act (14), approved by the United States Congress and signed by President Clinton in October 1998. This law, the first update of US copyright law since 1976, implements two international treaties: the World Intellectual Property Organization Copyright Treaty and the World Intellectual Property Organization Performances and Phonograms Treaty. The law, which had been vociferously debated between publishers and librarians, adds protection for online publications, defines penalties for infringement, and includes protection from huge liabilities for innocent infringement.

Journal Rights. By law, the author owns copyright automatically from the point of creation. However, all electrical and electronics engineering journals require that authors transfer their copyright to the publisher as a condition of publication. If the rights belong to the author's employer, journal publishers either assume that the author is empowered to sign the copyright transfer, or they require that an official of the employer sign the form. In general, rights are interpreted to mean the right to reproduce and distribute the work in any media, in any language, and in any part of the world. In other words, except for the rights that the publisher specifically cedes to the author, the publisher owns all rights. Publishers note that they must control rights in order to be assured that free distribution will not erode their financial stability and hence their ability to continue publishing. Further, they believe they are best equipped to ensure that the author receives appropriate attribution and that the work is not cannibalized.

Author Rights. Transferring the copyright does not affect patents or trademarks. The IEEE copyright transfer form (15) specifies that "Employers (or authors) retain all proprietary rights in any process, procedure, or article of manufacture described in the work." Among the retained rights a publisher may grant to the author are the following:

• Right to post on the author's Web site, provided it is not a commercial venture
• Right to use all of the article in another work, such as a book, that the author edits or writes, without payment of a permissions fee
• Right to make copies for distribution within the author's place of employment
• Right to make oral presentations
In any instance in which the published article is reproduced, publishers require that the copies include their copyright notice and a full bibliographic citation. Some publishers require that the article posted on the author’s Web site be linked to the publisher’s Web site for ordering information. Generally, publishers will permit any distribution of an accepted article only after official publication. Responsibilities. All parties in the scholarly endeavor have responsibilities to respect and protect intellectual property rights. The National Federation of Abstracting and Information Services (NFAIS) (16) has identified a series of rights and responsibilities for authors, producers, distributors, and users of database products (see Table 1). Author Responsibilities. Authors are expected to submit a manuscript to only one journal at a time. Considering the time the review process consumes, many authors chafe at the restriction; however, there are two good reasons for it. First, the practice significantly reduces the potential for one journal inadvertently to violate another’s ownership. Second, editors have reasonable assurance that they will be free to publish an article in which they invest considerable time and expense for the review process. As noted earlier, authors are expected to credit the words, thoughts, and findings of others and reference all statistical data. Careful recordkeeping may be required to avoid inadvertent plagiarism. Authors must also
secure permission for the use of any tables or figures. Permission from the copyright holder is also required for any lengthy quotation; some publishers now require permission for the use of 250 words or more. The decision of who should be listed as an author on a publication is an ethical one. Only two roles qualify someone to be identified as an author: The person either made a substantive and major contribution to the work that is described in the article, or wrote a major section. Being head of a department in which the work was completed or advising a graduate student or new professor does not entitle an individual to be listed as an author.

Editor and Publisher Responsibilities. Editors must assure that manuscripts receive fair, objective reviews within reasonable timeframes. They must make every effort to ensure that the author's work is protected throughout the process from receipt to publication and distribution. Consequently, they must hold reviewers responsible for maintaining confidentiality and providing unbiased reviews. When a manuscript is being prepared for publication, the publisher has an obligation to give the author an opportunity to review edited copy and to update the article if there has been a substantial time lag between acceptance and publication. After a manuscript is accepted, publishers have an obligation to ensure that potential readers have access to the published work. Publishers must protect the author's work in all copying and document delivery agreements so that the work is always attributed properly. An increasingly important issue is the responsibility of the publisher to preserve the work so that future researchers and practitioners will have access to it.

Reader Responsibilities. Readers include authors and editors, who have a vested interest in maintaining the integrity of published materials. All readers have a responsibility to attribute works appropriately and to request permission to republish any illustrations or lengthy quotations. They also have a right to make copies under some circumstances. With the Copyright Act of 1976 (17), legislators established the doctrine of fair use, which was intended to balance the rights of the copyright owner with the benefits to society of the free distribution of ideas. Under Section 107 of the law, materials may be used in limited ways for purposes such as comment, criticism, parody, news reporting, teaching, scholarship, and research. The law, rather than strictly defining fair use, set forth four factors that can be used to determine fair use. They are:

• Purpose and character of the use
• Nature of the copyrighted work
• Amount and substantiality of the portion used in relation to the copyrighted work as a whole
• The effect of use on the potential market
Thus, someone can quote 100 words of a 10-page article with attribution, but cannot use one line of a poem without asking permission. The Digital Millennium Copyright Act applies fair use to online products as well.

Fair use was debated hotly even before the advent of online journals and Internet access. In the 1990s, the debate intensified, with information providers on one side fearing totally free access that could destroy their viability, and librarians and their patrons on the other alarmed at increasingly high costs and the potential of making multiple payments for the same content. Digital networked environments in libraries and academic institutions and computer-assisted distance education raise new issues related to copyright. Can I make a paper copy of a digital file? Can I distribute copies to the class I am teaching? Can I email the file to a colleague? Can I post it on my Web site? Why can I not buy an electronic product and send it to a colleague after I have used it, if I can loan or give away a book I purchased with no problem? Nearly everyone agrees that any reader has the right to make a first-generation copy for his or her own personal use. The classroom use is probably acceptable if the professor found that article shortly before the class and does not reuse it on a regular basis. The reader should probably provide access information to a colleague, rather than the actual document. And it is likely that no one would agree to posting a published document on a personal Web site without express permission.
The mere fact that a product is given away rather than sold does not prevent copyright from being infringed. Concerns about maintaining the current doctrine of first sale, which permits buyers to distribute a purchased product in any way they choose, may be resolved in a digital environment only with emerging technologies. The differences between print products and electronic products, and how rights are handled for each, are the subjects of a debate that is likely to continue well into the twenty-first century.
Peer Review

In its simplest form, peer review occurs when one colleague asks another to read and comment on a paper in draft. At its most elaborate, peer review comprises a highly systematized set of protocols for evaluating and selecting papers for publication. The complexity of peer review increased as the sciences grew more complex. Whereas editors of early scientific journals appear to have made publication decisions on their own, most editors in the late twentieth century would hesitate to evaluate the range of papers they see without having advice from subject experts. Sometime in the mid-nineteenth century, the societies that were publishing the major scientific journals, such as the Royal Society of London and the Académie Royale de Médecine, began to develop mechanisms for peer review. In the United States, as funding for science and technology grew, the need for selection criteria for grants and publications increased.

Purpose. Peer review is first a gatekeeping function. An editor asks peers of the author, generally called referees or reviewers, to comment on the originality and significance of the author's work and to advise the editor if the work warrants publication. Some of the criteria reviewers may be asked to use include validity and accuracy of results, evidence that supports the author's conclusions, and adequate reference to previous work. Given that the purpose of primary journals is to provide a reliable record of knowledge, this gatekeeping function is critical. Second, the process is intended to improve the literature and provide assistance to authors. Reviewers generally are asked to write comments to the author, detailing as explicitly as possible how the paper might be improved. For example, the author may have overlooked important contributions of other authors in the field, and the reviewer will supply references. There may be errors in computation or faulty use of methodology. Perhaps the abstract does not accurately represent the content of the paper. The reviewer may have searched in vain for a statement of purpose. Or the presentation and the language used may need improvement. Reviewers are expected not just to point out problems, but to offer some suggestion for how the author might resolve the problem. How the review is structured is generally the editor's prerogative. Some editors provide very specific guidelines to reviewers. Others give no direct instructions and ask only that the reviewer provide a critique and a judgment on whether the manuscript warrants publication.

Process. When a manuscript arrives in the editorial office, it usually is first screened to see if it meets some basic criteria, such as being appropriate to the mission and scope of the journal or meeting length limits. The screener may be the editor of the journal or editorial staff depending on the journal. Next the manuscript is entered into the journal records. Beebe (18, p. 161) noted that the attention to detail required to produce high-quality journals begins when the manuscript arrives. "From that point, the person responsible must be able to tell exactly where the manuscript is in the process and what steps at what times it has gone through to get to that stage." Automated systems that generate control numbers and track manuscripts, reviewer responses, and editor decisions are a must for any journal that receives a substantial number of manuscripts.
These systems generally will also produce a list of suggested reviewers for a manuscript based on key words and reviewer availability. The Reviewers. Journal referees are subject experts, and most journals require that they be published authors. For primary journals all reviewers are volunteers who provide services without compensation,
although many reviewers say the knowledge they acquire from reading the work of their colleagues is more valuable than money. Some corporate journals offer an honorarium to external referees. Reviewers are bound by strict rules of confidentiality. They are expected to use the content of the papers they review only to evaluate it for possible publication. They may not cite the work, use it for teaching, or build their own work around it until it appears in print. Reviewers are asked to provide unbiased, objective evaluations and to offer constructive, helpful comments to help the author improve the work. Further, they are to do so within a fairly short time. IEEE guidelines for editors (http://www.ieee.org/pubs) state that reviewers must agree to return their comments and opinions in 30 days or less. Types of Reviews. Reviews may be categorized as double-blind, single-blind, or open. In a double-blind review, the author does not learn the identity of the reviewer and the reviewer does not know the author's name. In a single-blind review, the reviewer's identity is masked to the author, but the reviewer is told who the author is. In an open review, both author and reviewer are known to each other. Proponents of double-blind reviews believe that strict anonymity precludes bias and improves the quality of the review, whereas supporters of open systems believe that reviewers write more helpful, less judgmental reviews when they are signing their names to them. Open reviews are the least common in all disciplines, although there appears to be some increase in open reviews. For example, the British Medical Journal now places some submissions (not including those that have direct implications for health care) on its Website (http://www.bmj.com) for open comment at the same time they undergo traditional peer review. For the most part, however, reviews tend to be masked in some way. Some journals in electrical and electronics engineering practice double-blind reviews, and others use the single-blind process. Criteria. Editors state their criteria in various ways, but overall the general criteria are the same. They include the following:
• Relevance to the Journal. For some journals, this is the key criterion. If the content does not mesh with the mission and scope of the journal or if the reviewers believe it would not be of interest to the journal's readers, the manuscript is likely to be rejected.
• Significance. A publishable manuscript is expected to contribute to building knowledge in the field.
• Originality. Primary journals publish only new work.
• Technical Quality. Journals wish to publish only those articles that are technically sound.
• Quality of Presentation and Writing. Journals look for clear presentation and a writing style that is free of jargon and redundancies.
• Relationship to the Literature. Authors need to demonstrate that they are aware of and are building on the work that other authors have accomplished in the field.
Other criteria vary with the type of article. For example, reviewers of a research article will look for sound methodology, appropriate study design, and high-quality analysis.
Most journals provide their reviewers with either a formal rating sheet or written guidelines for reviewing manuscripts. As is true in most disciplines, there appears to be little formal training for reviewers or editors of electrical and electronics engineering journals. Reviewers are expected to recommend a disposition for the manuscript they review. Among the possible fates are the following:
• Accept as is.
• Request minor changes.
• Request major changes.
• Shorten for publication as correspondence.
• Refer to another journal.
• Reject.
It is the editor who makes the decision that will be transmitted to the author. When the reviewers are in general agreement, the editor's decision may be easy (unless he or she disagrees with the reviews). However, when faced with conflicting reviews, the editor must either decide which review to follow or ask for further reviews, which may also be contradictory. Journal guidelines may provide for an author rebuttal process, as the IEEE guidelines do (11). In the IEEE process, an author may submit a "suitably worded" argument against the criticisms of the reviewers and the subsequent editorial decision. The argument and responses from reviewers give the editor additional information on which to base a decision. The ultimate decision, however, is the editor's. The editor also determines what the author will receive with the decision. Some editors always send reviewer comments as they were received. Some edit them, and some provide only a summary of the comments. And in some instances, an editor may deliver a decision without sending any comments from the reviewers. Factors other than quality and significance may influence a decision on publication. Some journals will publish several articles on a subject as long as each offers some new approach. However, journals that emphasize breadth may decline to publish a manuscript because they have already published or accepted another on the same topic. In addition, some journal editors seek different types of balance, such as balance between theoretical and applied articles, or between academic and corporate papers, or between domestic and foreign authors. Evolution of Peer Review. Peer review has come under heavy criticism over the years. Complaints include bias, other breaches of ethics, delays in publishing important information, and a variety of other offenses. The biomedical field, prompted by several cases of fraud in the 1980s, convened three international congresses on peer review in the 1990s, the third taking place in Prague in September 1997. Miller and Serzan (19) called for the establishment of standards for refereed journals, and they suggested a number of procedures to improve the process of peer review, including increasing the number of screeners and publishing evaluation guidelines. Providing clear guidelines to reviewers and conducting training would likely improve the process as well. Regardless of the flaws in the system that reflect human frailty, peer review is the method by which journal articles find their way into published status. In the early and mid-1990s, authors in various fields proclaimed that the Internet provided the opportunity for self-publication and market evaluation; hence publishers and reviewers were no longer needed. By the late 1990s, the clamors had subsided, and even the electronic preprint servers were considering an evaluation process. Peer review will undoubtedly continue to evolve as it has over the past 150 years, but it will likely not be replaced by self-publication.
Production Process Preparation of Copy. Using the author’s final submission, the publisher will prepare the copy for reproduction and distribution. Publishers who still employ copyeditors will improve the language and organization to make the content more accessible to readers. Among the editor’s tasks are to correct grammar, spelling, and punctuation, and to check consistency of words, abbreviations, and numbers. The editor will ensure logical organization and make certain that computations in tables are correct. In addition, the editor will verify that references in the reference list correspond with those in the text and call any discrepancies to the author’s attention. Often, changes are made to conform to the style manual used for the journal. The author will then receive edited copy, along with queries to be answered. Although online editing is increasingly common, some copyeditors still work with paper copies. Authors are expected to review edited copy carefully, answer all queries, and complete any missing tasks such as outstanding permissions. This clearance process is generally the authors’ last opportunity to make changes without cost to them; consequently, they should review the copy with great care. Graphics can be particularly troublesome. Because authors often submit art that cannot be used as is, the publisher may rework the graphics and edit captions on
figures and charts. Wiley distributes guidelines for checking copyediting and proofreading book manuscripts (www.wiley.com/authors/guidelines) that describe the clearing process. This process is not uniformly used. Many journals in the 1990s no longer copyedit manuscripts. Instead the publisher expects the author to put the polished article into final form, if not the actual format. And some of the journals that do copyedit do so in a fairly cursory fashion. When the edited manuscript is in final form, it goes into typesetting, and it is formatted to fit the appearance of the journal. All of the elements—title, author name, abstract, key words, references, and so forth—are set according to a standard template. Tables and illustrations are incorporated into the text. Mathematical equations and chemistry are set in the fonts dictated by the journal template. This typeset copy, which will also receive page numbers for a printed journal, is then proofread, and corrections are made for typographical errors. Tagging. If journal articles are to be distributed in an electronic medium, they must be coded with the information that is necessary to process them. The electronic codes used for typesetting have typically been proprietary; consequently, they were not suitable for enduring delivery and linkages across various platforms. The solution was to create generalized markup languages that are independent of processes and platforms. Many readers are familiar with HyperText Markup Language (HTML), which was designed to send tagged information over the Internet to different computers with different software (20). HTML transmits quickly and is simple enough to learn that beginners can use its software tools to set up their own Web sites. The disadvantage of HTML is that it has a weak format capacity. Because materials are displayed as each browser determines, there is no uniform delivery of design. Further, HTML is not sufficiently robust to support the creation of secondary and parallel materials. As a consequence, many publishers turn to Standard Generalized Markup Language (SGML), which is a metalanguage and parent to other languages such as HTML. For its journal, the publisher creates a document type definition (DTD), which defines the logical organization of the document and describes the contents, order, and names of each element. Tags then identify the start and end of each element. SGML is a powerful, robust language that is completely portable, so it supports potential derivative works. Although SGML was established as a standard (ISO 8879) in 1986, its use did not become widespread until the mid-1990s. Publishers still introduce SGML tagging at different points in the production process. Some tag the content at the editing stage so that SGML supports either print or electronic products, whereas others add the tags after they have completed all other processing. Subscribers expect greater functionality as well as increased speed of delivery from electronic journals; consequently, special processing is required. Although early journals were delivered on CD-ROM, the trend in the late 1990s is clearly toward Web-based products. The IEEE statement on transition to electronic distribution (21) noted that “ . . . 
posting material on World Wide Web servers in formats compatible with the most widely used browsers is likely to be the best way to reach member and nonmember customers for the foreseeable future." Many publishers are coding journal articles in SGML, then deriving the HTML files needed for online delivery and linkages. Because readers prefer the look and feel of material that replicates the printed page, publishers often put their articles in a Portable Document Format (PDF) file. This platform-independent format, produced by Adobe Systems as a derivative of its PostScript page description language, allows printing in full PostScript and searching by keyword. Documents in PDF retain all original formats, fonts, and layouts of the print product. Although some publishers choose one format over another for a publication, Kasdorf (22) argued that both SGML and PDF are important for disseminating the same information in different ways. To speed up publication, publishers are beginning to post articles to online journals in advance of their print products. For example, the American Chemical Society (ACS) introduced a program they call As Soon As Publishable (ASAP), in which articles are posted as soon as they have gone through the review, editing, and author-proofing procedures (23). This accelerated process that treats articles individually, instead of in
batches slated for a specific volume and issue, creates particular project management problems for a publisher, especially when the articles are also gathered in a print issue. Paper Journals. Technology has also changed print production processes quite dramatically. In the 1970s, graphic artists still cut up long galleys from the typesetter to paste up pages with rubber cement. The printer then photographed the pages, made plates, and printed the journal from the plates. In the 1980s, desktop publishing systems evolved, and publishers sent camera-ready copy on resin-coated paper, produced by computer, to the printer. In the 1990s, printers receive journal copy on disk, or even via email, from which they print the journal. High-speed presses and binders speed the process further. Journals then go to a mailhouse, where labels, produced from a file sent electronically from the publisher, are attached to the journal itself or the package in which it is placed.
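To make the tagging discussion above more concrete, the following sketch shows how an article's logical elements might be represented in a generalized markup structure. It is written in Python using the standard xml.etree module and XML syntax (a restricted descendant of SGML); the element names are purely illustrative and are not taken from any actual journal DTD.

# Illustrative only: build a small element tree whose structure mirrors the
# kind of logical tagging (title, author, abstract, body) described above.
# Element names are hypothetical, not drawn from a real journal DTD.
import xml.etree.ElementTree as ET

article = ET.Element("article")
ET.SubElement(article, "title").text = "An Example Article"
ET.SubElement(article, "author").text = "A. N. Author"
ET.SubElement(article, "abstract").text = "A one-sentence abstract."
body = ET.SubElement(article, "body")
ET.SubElement(body, "para").text = "First paragraph of the article."

# Serialize the tagged structure; print or online renderings could be
# derived from this single tagged source, as the text describes.
print(ET.tostring(article, encoding="unicode"))

Because every element is explicitly named and delimited, the same tagged source can be transformed into HTML for online delivery or typeset for print, which is the portability argument made above for SGML-based production.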
Distribution Once the journal is completed—whether that means coded and ready for electronic transmission, mastered on CD-ROM, or printed and bound—it must be delivered to the subscribers and readers. The processes of delivery and payment vary significantly for print and electronic journals. Delivery of Print Journals. Printed journals must go into a mail delivery system. Most domestic US journals are sent via the United States Postal Service, either second class or third class. Consequently, delivery in the United States can take two to three weeks from the time the mailhouse delivers the journals to a post office. Journals destined for other countries travel via several different services, which generally combine airlift overseas with distribution in the in-country mail system. Libraries check journals in, catalog them, and put them in a reading room for current issues. Then when a volume is complete, the issues are bound and shelved in the periodical stacks. They may remain in stacks for an indefinite period, or they may be pulled for off-site storage, especially if the library purchases the journal in microform. Readers find the journals through the library cataloging system. Pricing of Print Journals. Print journals vary significantly in price, depending on the discipline. However, the pricing scheme is generally the same. Institutional subscriptions are priced at a substantially higher rate on the theory that they serve a body of readers, rather than one or two. Societies often provide one or more journals to members as a part of their membership fee and offer a discount on other journals as well. In addition, many societies give students or new professionals an even greater discount. Nonmember individual subscriptions are priced between the member and the institutional rate. Delivery of Electronic Journals. Once the electronic journal is ready for delivery, subscribers obtain access either through a password or at a uniform resource locator (URL) or Internet Protocol (IP) address. Libraries much prefer addresses so that they do not have to issue and track passwords for what can be thousands of users. Often companies and institutions invite users to establish profiles so that they can get alerts when an issue or article that might interest them is available. Although some journals are available on CD-ROMs, particularly collections of issues or journals, libraries prefer to obtain serial literature on the World Wide Web. Not only do CD-ROMs cause complications because of overloaded jukeboxes, they also tend to "walk" away easily. Pricing of Electronic Journals. Pricing has been extremely controversial, as publishers try to sort out business models that will enable them to assure stakeholders a return on the substantial investment they make in developing electronic products. Among the models are the following:
• Pay for Search. The user pays for any access to the material, even for searching.
• Pay for View. The user searches free, but must pay for any content actually called up from a bibliographic citation.
• Licensing. The institution negotiates a site license, which may include print and electronic publications and specifies unlimited searching and printing for a set fee.
These models are still in flux, as publishers and libraries continue to discuss not only method of payment, but the level of payment. Walker (3) proposed that all subscription fees of any kind be eliminated and the production and archiving of journals be financed with author page charges. Given that most journals set optional page charges, based on the author’s ability to pay (generally a function of whether a grant will cover them), it is unlikely that this proposal will come to pass in the near future. One of the issues in pricing electronic journals is the definition of what constitutes a single site. Is it one entity? one campus? one building? The advent of institutional and corporate intranets has made the question much more complex. The enforcement of limited access is further complicated by multicompany, badgeless working environments. These issues, along with many other details of pricing, are still to be resolved.
Access and Archiving If primary journal articles are to serve as an unbroken record of verified research, potential readers and researchers must be assured that they will continue to exist in a usable form. They must also be certain that readers will have access to the literature and that they will have some roadmaps to help them locate what they need. Thus the issues of how we maintain access and how we preserve the array of information are critical ones. Abstracting and Indexing Services. Abstracting and indexing services provide the chief form of map to the professional literature. Scientific information is so voluminous in the late 1990s that searching for any specific content can be a daunting task. To do their work, researchers need help in sifting through the flood of information and summarizing precisely what they need. Abstracts provide sufficient information to allow researchers to evaluate the utility of articles for their purposes; the index terms attached to one abstract may provide clues to where additional information on the subject may be found. The following are among the many abstracting and indexing services that publish content related to electrical and electronics engineering:
• Cambridge Scientific Abstracts
• Current Contents/Engineering, Computing, & Technology (ISI)
• Engineering Index
• Fluid Abstracts and FLUIDEX
• INSPEC
• Mathematical Reviews
• SciSearch (ISI)
• Shock and Vibration Digest
• Statistical Theory and Method Abstracts
• Telecommunications Abstracts
Although abstracting services online or on CD-ROM provide easy searching to narrow access to specific information, many readers still prefer to browse. Consequently, paper indexes still exist. Access to Single Articles. Instead of maintaining shelves of journals that may be read at some point (the just-in-case model), individuals as well as libraries are finding that purchasing articles as needed (the just-in-time model) is cost-effective. The options generally are interlibrary loan, commercial document delivery, and internal document delivery. Often libraries can acquire a copy of a journal article from another institution through interlibrary loan.
Faculty and students in many institutions can now search the UnCover database, which supplies document delivery by fax, and order journal articles. Having conducted studies that demonstrate substantial savings from canceling low-use journals, many libraries now permit their clientele to order articles from journals the libraries do not own without mediation from the librarian. Institutions are also using Bell & Howell Learning Information's ProQuest. Some libraries have experimented with online public-access catalogs (OPACs). To create the catalogs, librarians scan tables of contents, pull out bibliographic information for the articles, and load the data into an online file. Adding SGML tags creates a searchable database. Then patrons can order articles from the internal database. Reprints and Preprints. Throughout the history of journal publishing, it has been customary that published authors have an obligation to share their work with others. That sharing may include original data or other materials, or it may mean giving someone a copy of the actual published work. Many scholars collect copies of articles as references for their own work or to keep up with advances in their field. Publishers give authors an opportunity to order reprints of their articles when they approve their edited manuscripts or page proofs. For many years it was customary for publishers to also provide a set number of reprints free; however, many cost-conscious publishers have abandoned the practice. Who pays for copies has varied over the years. In the eighteenth century authors, anxious to share their work as soon as possible, arranged for printers to make copies of their proofs so that the authors could distribute them in advance of publication. Knight [as reported in Wells (24)] suggested that these were the first preprints. Today authors often post a preprint to their Web server until it has been accepted for publication. Policies on posting copies once the article has been published vary from publisher to publisher. ACM permits authors as part of their retained rights to post their published article to their own servers and even make changes to it (25). IEEE (26) also permits authors or their companies to post IEEE-copyrighted material on their sites. Identifiers. Assuring access over time may become tenuous if we do not develop new means of identifying individual articles to augment the International Standard Serial Number (ISSN) that identifies specific periodicals. Regular users of the Internet, for example, have often discovered the frustration of changes in a URL. Payette (27) noted that URLs are addresses masquerading as identifiers and called for moving beyond the URL as an identifier. She noted that the Internet Engineering Task Force, which is the standard-setting body for Internet development, has created a uniform resource name (URN) that would be a "globally unique, persistent identifier." The URN incorporates three main components: (1) a naming scheme; (2) a resolution system; (3) registries. Using the ISSN as a base, the Serials Industry Systems Advisory Committee (SISAC) created a code that identifies an issue of a periodical and an article. The serials item contribution identifier (SICI), which is a string of letters and numbers, includes several pieces of information: the issue date (chronology), volume and issue numbers (enumeration), location number, title, standard version number, and record validation character.
The American National Standards Institute approved it as a standard (Z39.56) in 1991. Three other systems are of note. Online Computer Library Center (OCLC) has created a persistent uniform resource locator (PURL) that uses an intermediary database to find the latest location. Using the same three components as the URN, the Handle System was developed by the Corporation for National Research Initiatives (CNRI) to create, administer, and resolve unique identifiers. In addition, the Association of American Publishers has created the digital object identifier (DOI) to deal with the challenges of selling and managing copyrights for digital publications. The DOI enables a publisher to identify fragments of an article, such as a table, a figure, or even a paragraph. Archiving. Archiving is a systematic approach to preserving materials so that they can be retrieved and used by readers no matter what conditions exist in the future. Storing journals in boxes in warehouses is one method of archiving. Unfortunately, we have discovered that acid paper turns brittle, heat and moisture destroy journals, and rodents or insects may make meals of them.
Paper is a fragile medium, yet it may be more lasting than many electronic products. In addition to all of the natural, organic, and chemical dangers posed to paper products, electronic media face other threats. Obsolescence is a major problem. If the hardware and software that deliver the electronic journal are no longer operational and available, the journal cannot be read. Electronic products can be altered so easily that dynamic data constitute another concern. Regardless of whether changes are made accidentally or on purpose, whether they are made with good intentions or fraudulent ones, the result is the same: the original material is lost. All the aspects of electronic publication that offer such exciting new opportunities for interactivity and linkages pose serious challenges to preserving an unchangeable archive. Although the problems have not been resolved, it is clear that materials must be preserved in a format that is independent of process and platform and that archivists must be prepared to migrate information from one medium to another in the future (28). Library Roles. Traditionally, libraries have been expected to ensure the preservation of published works. Because libraries fulfilled this expectation, many publishers did not even maintain their own archived set of published documents and destroyed their inventory of back issues after a short time. Most provided a complimentary subscription to UMI (now Bell & Howell Learning Information), so that microfilm versions were available. In the 1990s, however, the roles and expectations began to change. Faced with declining or static budgets, rising costs, storage problems, and the enormous difficulties of dealing with fragile documents, librarians sought help with the function of preservation. Librarians also noted that licensing agreements for electronic products generally vest ownership with the publisher, not the library. Consequently, the products are not theirs to preserve. That is not to say that librarians are not working on the issues of preservation and access. These are concerns for all libraries, and many major projects are underway. For example, the Cornell University Library completed a study in 1997 funded by the National Endowment for the Humanities to test and evaluate using imaging to produce computer output microfilm that meets national standards for preservation (29). This project was a complement to a study at Yale University converting microfilmed brittle books to digital format. The Research Libraries Group also has a committee working on models and guidelines for archiving digital information (30). As the twentieth century comes to an end, publishers are joining librarians in working to ensure that scientific and technical information will continue to be available. Dementi (31) noted that items that are well used are the most likely to be archived and migrated to new media, so the risk of their being lost is minimized. The challenge will be to preserve important information that is used less frequently. Despite the problems faced in archiving materials, emerging technology offers hope for preserving the whole body of literature.
Conclusion For more than three centuries, the journal article has served as the chief means of scientific and professional communication. No other form of publishing delivers its level of originality, rigor, and substantiation. Changes in technology have resulted in changes in presentation and have facilitated delivery in different media; however, the basic attributes and process remain unchanged. There is a trend toward databases of information that incorporate journals from many different publishers and disciplines, as well as nonprimary material. This trend is likely to increase. Some thoughtful observers suggest that journal articles may no longer be the chief means of first communication of scientific information. The iterative process of drafting, revising, submitting, and revising again before an article reaches production takes more time than scientists may find acceptable for breaking news. Consequently, they may use conference presentations, trade publications, preprint servers, electronic discussion groups, and other means to share their discoveries initially. Having participated in an extensive dialogue, they will then write formal journal articles to assure that their work remains a part of the knowledge base.
Because of its rigorous process and the expectation that it will be preserved, the primary journal article is likely to remain the most relied-upon source of scientific information for some time. Note. This article contains many URLs for Web sites. These were accurate as of April 1, 1999, but they are subject to change. Most commonly, Web-site owners redesign their sites and create new internal addresses; consequently, using the base URL may bring the searcher to the home page with pointers to the specific content sought.
BIBLIOGRAPHY
1. C. T. Bishop How to Edit a Scientific Journal, Philadelphia: ISI Press, 1984.
2. A. Odlyzko On the road to electronic publishing, [Online], 1996. Available: http://www.sfu.ca/scom/odlyzko-96.html.
3. T. J. Walker Free internet access to traditional journals. Am. Sci., 86: 5, [Online], 1998. Available: http://www.sigmaxi.org/amsci/articles/98articles/Walker.html.
4. A. C. Schaffner The future of scientific journals: Lessons from the past. Inf. Technol. Libr. 13: 239–249, 1994.
5. T. Lieb Inactivity on interactivity, J. Electron. Publ., [Online], 1998. Available: http://www.umich.edu/jep/0303/lieb0303.html.
6. B. Meyers L. Beebe The Future of the Print Journal, Hanover, PA: Sheridan Press, 1999.
7. C. Osburn The place of the journal in the scholarly communications system, Libr. Resour. Tech. Serv., 28: 320, 1984.
8. L. Guernsey V. Kiernan Journals differ on whether to publish articles that have appeared on the web, The Chronicle of Higher Education, Information Technology. [Online], 1998. Available: http://chronicle.com.
9. C. Tenopir D. W. King Trends in scientific scholarly journal publishing in the United States, J. Scholarly Pub., 28: 135–170, 1997.
10. R. A. Day How to Write and Publish a Scientific Paper, 3rd ed., New York: Oryx Press, 1988.
11. IEEE, Information for authors, [Online], 1998. Available: http://www.ieee.org/pubs/authors.
12. R. K. Raney Into a glass darkly, J. Electron. Publ., [Online], 1998. Available: http://www.press.umich.edu/jep.
13. J. Grycz (ed.) Professional and Scholarly Publishing in the Digital Age, New York: Association of American Publishers, Professional and Scholarly Publishing Division, 1997, p. 73.
14. Digital Millennium Copyright Act of 1998, P.L. No. 105-304, 112 Stat. 2860.
15. IEEE, Copyright transfer form, [Online], 1998. Available: http://www.ieee.org/copyright.
16. National Federation of Abstracting and Information Services (NFAIS), The rights and responsibilities of content creators, providers, and users, [Online], 1997. Available: http://www.pa.utulsa.edu/nfais/whitepaper.
17. Copyright Act of 1976, P.L. No. 94-553, 90 Stat. 2541.
18. L. Beebe Professional Writing for the Human Services, Washington, DC: NASW Press, 1993, p. 161.
19. A. C. Miller S. L. Serzan Criteria for identifying a refereed journal, J. Higher Educ., 55: 673–699, 1984.
20. T. Horrocks Design issues on the World Wide Web, Learned Publ., 9: 67–71, 1996.
21. IEEE, Status and vision: Transition of IEEE publications to electronic dissemination, [Online], 1996. Available: http://www.ieee.org/pubs/visstat.
22. W. Kasdorf SGML and PDF: Why we need both, J. Electron. Publ., [Online], June 1998. Available: http://www.press.umich.edu/jep/03-04/Kasdorf.html.
23. S. L. Wilkinson Electronic publishing takes journals into a new realm, Chem. Eng. News, 76: 20, 1998.
24. E. B. Wells Reprints, in Encyclopedia of Library and Information Science, New York: Dekker, 1986, Vol. 40, Suppl. 5.
25. ACM, ACM interim copyright policy, [Online], 1995. Available: http://www.acm.org/pubs/copyright policy/#Authors.
26. IEEE, Copyrights, trademarks, and permissions, [Online], 1998. Available: http://www.ieee.org/copyright/policies/html.
27. S. Payette Persistent identifiers on the digital terrain, RLG DigiNews, 2 (2), [Online], 1998. Available: http://www.rlg.org/preserv/diginews.
28. B. Meyers L. Beebe Archiving from a Publisher's Point of View, Hanover, PA: Sheridan Press, 1997.
29. A. R. Kenney The Cornell digital to microfilm conversion project: Final report to NEH, RLG DigiNews, 1: 2, [Online], August 15, 1997. Available: http://www.rlg.org/preserv/diginews.
30. RLG Preservation Program, RLG preservation working group on digital archiving, [Online], January 14, 1998. Available: http://www2.rlg.org/preserv.
31. M. Dementi Access and archiving as a new paradigm, J. Electron. Publ., 3, [Online], 1998. Available: http://www.press.umich.edu/jep.
READING LIST
R. W. Burchfield (ed.), The New Fowler's Modern English Usage, 3rd ed., Oxford, UK: Oxford University Press, 1996.
S. P. Carter Writing for Your Peers: The Primary Journal Paper, New York: Praeger, 1987.
H. B. Michaelson How to Write and Publish Engineering Papers and Reports, 3rd ed., Phoenix, AZ: Oryx Press, 1990.
W. Strunk, Jr. E. B. White The Elements of Style, 3rd ed., New York: Macmillan, 1979.
University of Chicago Press, The Chicago Manual of Style, 14th ed., Chicago: University of Chicago Press, 1993.
LINDA BEEBE Parachute Publishing Services
Telecommunication Methods
S. A. McClellan, K. F. Conner, David A. Conner, University of Alabama at Birmingham
Wiley Encyclopedia of Electrical and Electronics Engineering, Standard Article. Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved. DOI: 10.1002/047134608X.W5616.pub2. Article Online Posting Date: December 27, 1999.
Abstract. The sections in this article are: Introduction - The Environment; Mainstream Information Exchange; The Internet and the World-Wide Web; Conventional Telephony; Modern Conferencing Methods; Wireless Telephony - The Next Generation; Summary.
TELECOMMUNICATION METHODS
INTRODUCTION - THE ENVIRONMENT In the "Age of Information", the technologies which enable communication between humans have assumed an almost surrealistic profile. This environment may be due, in part, to the tremendous complexity which enshrouds many specifications for digital communications between machines. It is this unforgiving complexity which demands an unprecedented level of worldwide collaboration between technologists, equipment manufacturers, and researchers to develop open standards, protocols, and formats so that machines can "communicate", thus allowing humans to "communicate." It is this complexity which has also confounded the business and marketing of technology with an ever-deepening stream of acronyms, cryptic terminology, and system architectures. The purpose of this article is to provide an overview of several of the major communication technologies which, having been formally standardized, having become "de-facto" standards via widespread popularity, or having been accepted simply as "part of life", have emerged as powerful and necessary means of communication between humans. One of the most mainstream examples of communication technology is the common telephone. Telecommunication technology and some of the concepts, acronyms, and architectures which comprise this extremely complex subculture are discussed in several of the sections that follow. From Plain Old Telephone Service (POTS) and Facsimile transmission (FAX), to the emerging "Converged Communications" environment, these technologies and their rapid deployment play a central role in all "high tech" communication and in the development of the "information superhighway." A brief discussion is presented of (a) the salient points of standards and system architectures for private telecommunication networks, (b) interfaces with public providers, and (c) basic concepts related to voice, video, and data communications via circuit-switched or packet-switched networks. A second tremendously important technology in the expanding scope of communication is the "language of the Internet." The orderly, organized, and open standardization of the suite of protocols which are the building blocks of the Internet phenomenon is briefly discussed, and several of the primary protocols are highlighted which allow for seamless communication between humans using computers. The purpose of this discussion is not to provide the reader with a "developer's understanding" of TCP/IP or to delve deeply into the structure of particular protocols. Rather, an attempt is made to focus attention on several user-level protocols which have had significant impact on the rapid development of human-computer-human communication, enabled tremendous functionality or new innovation, or have matured to a point where they permit "high-level" classification for the benefit of nontechnical users. Included in these classifications are electronic mail (text and non-text formats, system concepts, and underlying technologies); the Hyper-Text Transport
Protocol (HTTP) and its accompanying user-interface via the Hyper Text Markup Language (HTML), and the World-Wide Web (WWW). MAINSTREAM INFORMATION EXCHANGE Electronic Mail (email) and facsimile transmission (fax) are increasingly becoming the medium of choice for efficient communication of business and engineering information, as well as personal interactions. Although "softcopy" may never completely replace "hardcopy", the convenience, rapidity, and repeatability of electronic memoranda, facsimile and electronic document interchange, and broadcast messaging have re-structured the way humans communicate. In addition, the Internet phenomenon of "the web" has revolutionized storage, dissemination, and access for textual, graphical, and pictorial information. With the continuing evolution of "network computing" and increasing interactivity of web-pages, the once-serene research environment of the Internet has given way to mass commerce and hypertext chaos. Electronic Mail Numerous Internet Requests for Comment (RFC's) document the format of email documents, attachments, and transmission protocol syntax. These documents are archived along with other Internet standards by the Network Information Center (NIC) at http://ds.internic.net or by the Internet Engineering Task Force (IETF) at http://www.ietf.org. Of the many RFC's, there are primarily two technologies used to enable the transport, storage, and retrieval of electronic mail. As is the case with most Internet-class protocols, these technologies are divided into "client" and "server" functions. On the "server" side, the Simple Mail Transport Protocol (SMTP) is a well-developed member of the Internet Protocol (IP) suite which has long been responsible for the transport and storage of electronic messages. On the "client" side, where varied user interfaces and capabilities are required for retrieving messages, a single standard isn't as clearly delineated. However, the decentralization of computing resources from "mainframe" to "workstation" environments has produced a need for much more sophisticated, network-aware interfaces between users, workstations, and mail repositories. Bulk-Mail Transport The usefulness of SMTP lies in its capability to transport electronic mail between "bulk storage" computers and then allow distribution of the mail to appropriate users. SMTP is defined in RFC 821 and uses a request-response protocol based around the "usual" IP requirement that all protocol content be clear-text, human-understandable strings formatted according to the American Standard Code for Information Interchange (ASCII). Responses to the commands are also text lines, starting with a 3-digit code to indicate the result of the command and finishing with arbitrary text for informational purposes. Although this "open" format for Internet protocols is tremendously convenient for development, etc., it has been the source of innumerable security
breaches in distributed computer systems. For example, it is well-known that SMTP servers traditionally "listen" on TCP port 25 for incoming connections. It is a fairly simple matter for an unscrupulous user to connect with the server and use SMTP-commands to exploit security weaknesses in the implementation of the server software. It is also straightforward to "snoop" the clear-text email transactions entering/leaving TCP port 25. Simple electronic messages consist of separate functional parts ("envelope", "header", "body"), much like traditional postal communications. RFC 822 defines the basic header syntax for Internet email as a series of lines with field: data format. In keeping with common sense, usual header fields in email transactions are From, To, Cc, Reply-To, Subject, and Date. The data contained in each header field must comply with syntax particular to that field. As a message traverses through a series of SMTP servers (or "Message Transfer Agents"), each server adds a Received field on the message with time/date and server identifier to aid in retracing the message's path (in the same fashion that documents are physically stamped to indicate handling) (1). For messages which contain non-ASCII information (such as files created with word processors, spreadsheet programs, images, audio/video, etc.) the "usual" approach to handling clear-text email isn't applicable because the numeric values of the data bytes may not correspond to ASCII character codes. In these cases, the Multipurpose Internet Mail Extensions (MIME) defined in RFC 1521 and RFC 1522 are used to augment the basic header information of RFC 822 and slightly restructure the message body. The MIME RFC's define a flexible, extensible, and reversible procedure for translating binary data into unique sequences of ASCII codes which comply with RFC 822. These "MIME attachments" can be included in email messages and easily transported by SMTP. MIME encoding uses a technique known as "base64" where groups of three adjacent 8-bit non-ASCII bytes are broken into four pieces, each having 6 bits. These 6-bit units are then represented as a valid 7-bit ASCII character compatible with RFC 822. The data resulting from a MIME-encoding procedure consists only of ASCII codes for letters, digits, and the symbols +, /, and =. The MIME extensions of RFC 1521 and 1522 define additional message header fields such as MIME-Version, Content-Type, and Content-Transfer-Encoding to specify the multimedia characteristics and standard representations for messages which have "complex" (i.e. non-ASCII) components. Any message which does not have a MIME-Version field in the header is assumed to be composed of clear-text only. The Content-Type header field is an extensible definition of basic document types, each of which may have multiple sub-types. The seven document types and subtypes defined in RFC 1521 are shown in Table 1. The Content-Transfer-Encoding header field is used to declare which type of binary-to-ASCII "packaging" was used to encode the binary data for transmission via SMTP. The simple ASCII-formatted "header" plus "body" plus "attachment" architecture of email is an important building block for transport of other application-level Internet messages. These important payloads span several areas, from the hypertext of the
WWW to the basic signaling messages proposed for "next generation" telecommunications. User Access to Mail Electronic messages are separated by an SMTP server (often called a Message Transport System, or MTS) based on user-specific criteria. In a multi-user environment, this criterion is often the "username" of the person to whom a message is addressed. Electronic mail destined for a specific user is collected in that user's "mailbox" for the user to access at some later time. The modern decentralization of computing resources based on Internet Protocols (among others) and highly capable (portable!) workstations has made it impractical to maintain complete MTS services on personal computers. As a result, the methods by which end-users access their electronic "mailboxes" have undergone a dramatic paradigm shift. There are three general modes for mailbox access as discussed in RFC 1733: "Online", "Offline", and "Disconnected." The primary difference among these modes is the location of the electronic messages after the user has accessed (or "read") them. "Online" and "Disconnected" access modes maintain a master copy of messages in a mailbox on the MTS and differ in the use of a local "cache" copy of the messages (Disconnected mode) versus true remote manipulation (Online mode). "Offline" access entails fetching several messages from a server to some local storage, then deleting the fetched messages from the server. Of these access modes, two specific approaches, standardized in multiple RFC's, are dominant (2). POP mail The Post-Office Protocol (POP) is a very popular realization of an "Offline" access paradigm. POP is an application-specific message access protocol which is intended to move mail in a store-and-forward fashion from a repository (mailbox or server) to a single destination. Originally, the POP protocol was useful in the on-demand delivery of electronic messages from an MTS to a personal computer where the messages were read, archived, edited, deleted, and manipulated locally by the end-user. After several revisions, the current POP specification (POP3) is documented in RFC 1939 and provides for user authentication as well as some remote mailbox manipulation. Note that passwords are transmitted in clear-text to correspond with the format of Internet text messages defined in RFC 822. This is a significant security hazard, and has been addressed in several ways, most notably through the use of a "secure socket" approach (RFC 2595). POP3 sessions are TCP connections which progress through several states. These states are described by the usual command/response pairs of text-based Internet protocols. After initiation and establishment of the TCP session between client and server, the client must successfully proceed through the AUTHORIZATION state where username/password are validated. Properly authorized clients proceed into the TRANSACTION state where any combination of POP3 transaction commands may be issued to manipulate the user's mailbox, retrieve messages, and so on. When the POP3 client terminates the transaction sequence, the session enters the
UPDATE state where the server finalizes mailbox changes, deletes retrieved messages from the mailbox, and closes the TCP connection. This flow of events is illustrated in Fig. 1. Several implementations of POP3 clients are available for personal computers. One of the most popular client interfaces is Eudora, which is developed and maintained at QUALCOMM, Inc. Eudora, like other POP3 clients, has user-interface features which implement convenient retrieval, local processing, manipulation, and storage of electronic messages. Some features which are not specified in Internet mail access protocols, but are important in managing large quantities of electronic mail, include hierarchical folder structures for archiving messages, automatic filtering and sorting of incoming messages, the ability to
correctly and automatically handle (decode) MIME extensions, and seamless integration with the operating system of the local (personal) computer.
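To make the POP3 states described above concrete, the short sketch below uses Python's standard poplib module. The server name, username, and password are hypothetical placeholders, and a real deployment would normally use the secure-socket variant noted earlier (RFC 2595) rather than sending the password in clear text.

# Minimal sketch of an "Offline"-style POP3 session (all names are placeholders).
import poplib

conn = poplib.POP3("pop.example.org")            # open the TCP connection
conn.user("username")                            # AUTHORIZATION state:
conn.pass_("password")                           #   validate username/password

count, total_octets = conn.stat()                # TRANSACTION state begins
for msg_num in range(1, count + 1):
    response, lines, octets = conn.retr(msg_num) # fetch one RFC 822 message
    text = b"\r\n".join(lines).decode("ascii", errors="replace")
    conn.dele(msg_num)                           # mark for deletion (Offline model)

conn.quit()                                      # UPDATE state: commit and close

The user/pass exchange corresponds to the AUTHORIZATION state, the stat/retr/dele calls to the TRANSACTION state, and quit to the UPDATE state in which the server finalizes mailbox changes and closes the connection.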
Internet Message Access Protocol (IMAP) The Internet Message Access Protocol, described in RFC 2060, is a functional superset of POP which allows for much more sophisticated interactions between user (client) and mailbox (server). Additionally, IMAP provides a capability for the user’s client software to retrieve the structure or “envelope” of messages without the need for downloading the entire message body. This approach is especially useful in reducing transmission time for users with
low-bandwidth network connections. During the parsing process on the server, user-specified criteria may be used to establish "selective fetching" of messages with certain body content, header content, and so on. This approach has increased functionality over purely "Offline" access protocols, such as POP, and provides a messaging technology which is more than adequate for contemporary needs. In IMAP-compliant client/server interactions, the server or MTS is implicitly the primary repository of electronic messages. Client software may retrieve/delete messages and manipulate the server mailboxes as if they were local, but the server always maintains the mailboxes. In short, the IMAP paradigm provides robust support for "Online" and "Disconnected" access to the mail repository, POP-equivalent support for "Offline" access, and an efficient, application-specific interface for message, mailbox, and client/server interaction. IMAP also provides the capability for an "Offline" user's message cache to resynchronize with the server (2).

Table 9. Common speech compression algorithms
Std.        Pseudonym   Rate (kbps)   Main usage                      Complexity
G.711       µ-law PCM   64            long distance telephony         Low
G.726       ADPCM       16/24/32/40   videoconferencing               Moderate
G.728       LD-CELP     16            videoconferencing               High
G.729       CS-ACELP    8/11.8        wireless & Internet telephony   High
GSM 06.10   GSM-FR      13            TDMA digital cellular           Moderate
GSM 06.60   GSM-EFR     12.2          TDMA digital cellular           Moderate
GSM 06.90   GSM-AMR     <12.2         TDMA digital cellular           Moderate
IS-733      QCELP13     <13           CDMA digital cellular           High
IS-893      SMV         <8.5          cdma2000 wireless               High
IS-127      EVRC        <9.6          cdma2000 wireless               High
RFC-3951    iLBC        13.3/15.2     Internet telephony              Moderate

An interesting example of "Online" electronic messaging is the Short-Message System (SMS) of wireless telephony. These "email-like" services were originally designed
to carry low volumes of data between users of wireless telephones. In many wireless networks, however, the SMS service has become so popular that messaging traffic threatens to choke the operator’s control network. The exponentially increasing popularity of SMS illustrates a parallel of “convergence” between the paradigms of packet-based, Internet-style messaging and conventional (wireless) telephony. The issue that drives these disparate technologies to a form of “convergence” is the fundamental requirement for efficient communication between the users. Courtesies and Other Etiquette The etiquette of communicating electronically is often given the name “netiquette.” Since electronic communications involve the sending and receiving of information without the opportunity to experience the verbal and visual cues available in face-to-face interactions, the recipient is unable to establish the tone of voice, facial expression, body
language, or gestures of the sender. Therefore, care must be taken to ensure that the printed text conveys the exact emotion desired. Since only ASCII-strings are often transmitted, features such as font changes, italics, boldface type, and underlining are unavailable. Several conventions have evolved to counter this problem, and some typically accepted examples are shown in Table 2. Some email software allows composition of emails formatted with “rich text.” In these cases, the sender has the privilege of using different font styles, font sizes, and colors as well as text enhancements such as bold face type, underlining, italics, and complex spacings. In this environment changes in font size can be used to indicate level of importance, and color changes or indention can be used to emphasize various points in the text. However, the recipient’s experience with the transmitted correspondence may differ significantly from the sender’s intentions.
Recall that the entire email message must comply with RFC 822. This imposes a requirement for a plain-text message body. So, email messages composed with "rich text" formatting extensions must be MIME-encoded and transmitted as an attachment to the plain-text message. Unfortunately, the process of extracting a plain-text message body from the original "rich text" message is not prescribed by any standards, and may not have a particularly appealing outcome. Additionally, the recipient's email software may not be configured to handle such messages properly, or interpret the "rich text" attachment correctly. As a result, "what you see" may not be "what they get." Care should be taken when authoring email. Email is often forwarded to others; and, once text has been transmitted, there is a possibility that the text will continue to circulate for a long time. So, effort should be expended to ensure that the desired message has been conveyed. Additionally, since the reader will form an impression of the
sender based upon the structure and content of the message, care should be taken by the sender to present himself “in the best light”. The simple rules listed in Table 3 will aid in accomplishing this goal. Since the Internet is international, the meaning of geographic/regional and national references can be lost on the reader. Consider the following examples.
A person living in the western part of the United States should not use the phrase "we in the west" because "west" is a relative term and can convey a different image to a person in Europe. The phrase "we in the western portion of the United States" would be more appropriate. Terms or phrases that possess a national context, e.g. "first amendment," "un-American," and "American values" should be avoided since they have no meaning to most residents of other countries.
Figure 1. State diagram of POP3 session.
Finally, netiquette does allow the use of popular abbreviations including FYI (for your information), BTW (by the way), and ASAP (as soon as possible).
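Before leaving electronic mail, the sketch below ties together the MIME and formatting points above. It uses Python's standard email and smtplib modules; the addresses, file name, and server name are hypothetical placeholders. The binary attachment is carried as a base64-encoded MIME part, the binary-to-ASCII packaging described earlier, and the finished RFC 822 text is handed to an SMTP server on TCP port 25.

# Minimal sketch: plain-text body plus a base64-encoded MIME attachment,
# submitted via SMTP. All names and addresses are hypothetical.
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

msg = MIMEMultipart()
msg["From"] = "sender@example.org"
msg["To"] = "recipient@example.org"
msg["Subject"] = "Status report"
msg.attach(MIMEText("Plain-text body that any mail reader can display."))

with open("report.pdf", "rb") as f:                    # non-ASCII payload
    part = MIMEApplication(f.read(), _subtype="pdf")   # base64-encoded by default
part.add_header("Content-Disposition", "attachment", filename="report.pdf")
msg.attach(part)

with smtplib.SMTP("mail.example.org", 25) as server:
    server.sendmail(msg["From"], [msg["To"]], msg.as_string())

Printing msg.as_string() shows the header fields, the MIME boundaries, and the base64-encoded attachment body discussed in the preceding paragraphs.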
Facsimile Transmission Facsimile transmissions, or "faxes," have developed into a commonplace means of communication between humans for purposes ranging from transmitting technical specifications to ordering pizza. Fax machines with telephone handsets and sophisticated user interfaces can be purchased as stand-alone devices, and "multifunction" office machines with fax, phone, printer, scanner, and photocopying capability are also commonplace. Additionally, many modems for personal computers have support to transmit/receive faxes in addition to data capabilities. All of these devices are based on international telephony standards (specifications) set forth by the International Telecommunication Union, Telecommunication Standardization Sector (ITU-T, formerly the CCITT). There are standards for control of modem functions, interfaces between applications and modem hardware, the modulation techniques used for certain data rates, the procedure used by modems which are communicating fax data, and the types of "massaging" that can be performed on the data to reduce transmission time. As with most telecommunications functions, the domain of faxes is awash in a sea of acronyms and cryptic mnemonics. The purpose of this section is to give a general overview of the relationship between the most significant acronyms, the functions, and the application-level capabilities of modems, faxes, and computer interfaces.
General Background

In its most basic form, the communication of a fax entails scanning a document in raster fashion with a bright spot of light and classifying the reflected light as either white or black. Since most documents have a large amount of white space, the data that results from scanning can be compressed in a simple, lossless fashion; the compressed data is usually transmitted using sophisticated, efficient modulation schemes. Standards for facsimile transmission were initially proposed by the American National Standards Institute (ANSI), the Electronic Industries Association (EIA), and the Telecommunications Industry Association (TIA). The standard, called EIA/TIA-614 (now obsolete) and later subsumed in ITU-T Recommendation T.2, is also called "Group I" fax and describes the transmission of a page of information using very simple modulation schemes and poor resolution. The "Group II" fax standard (ITU-T Recommendation T.3, almost obsolete) describes a more sophisticated modulation technique that allows faster transmission of a page (about three minutes at a resolution similar to Group I). One of the current standards for fax transmission is described in ITU-T Recommendation T.30 as "Group III" fax. Group III fax machines are currently the most commonly used fax technology and can transmit a page in about one minute.
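The payoff from all that white space can be illustrated with a short sketch. The Group III compression schemes themselves are specified in ITU-T Recommendation T.4 (run lengths encoded with modified Huffman codes); the Python fragment below is only a simplified run-length illustration of why a mostly white scan line can be represented far more compactly than its raw pixels, not an implementation of T.4.

    def run_lengths(scan_line):
        # Collapse a scan line of 0s (white) and 1s (black) into [pixel, count] runs.
        runs = []
        for pixel in scan_line:
            if runs and runs[-1][0] == pixel:
                runs[-1][1] += 1
            else:
                runs.append([pixel, 1])
        return runs

    # A 1728-pixel scan line that is mostly white with one short black mark.
    line = [0] * 800 + [1] * 12 + [0] * 916
    print(run_lengths(line))   # [[0, 800], [1, 12], [0, 916]] -- three runs instead of 1728 pixels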
Group III Fax

The ITU-T standard for Group III fax (T.30) specifically covers the procedure for initiating and managing a fax transfer. Supporting ITU-T Recommendations (e.g., T.4) cover the page size, resolution, transmission time, image format, and supported compression schemes. According to T.30, a Group III fax call proceeds through five states: Call Set-Up, Pre-Message Procedure, Message Transmission, Post-Message Procedure, and Call Release.
Figure 2. Group III fax state diagram.
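As an informal summary of the state diagram and of the transitions described in the following paragraphs, the sketch below names the five T.30 session states and the nominal progressions between them in Python. The identifiers and the simplified transition table are mine, chosen for illustration; they are not drawn from the text of Recommendation T.30 itself.

    from enum import Enum

    class T30State(Enum):
        CALL_SETUP = 1
        PRE_MESSAGE_PROCEDURE = 2
        MESSAGE_TRANSMISSION = 3
        POST_MESSAGE_PROCEDURE = 4
        CALL_RELEASE = 5

    # Simplified transition table for a Group III session.
    NEXT_STATES = {
        T30State.CALL_SETUP: {T30State.PRE_MESSAGE_PROCEDURE},
        T30State.PRE_MESSAGE_PROCEDURE: {T30State.MESSAGE_TRANSMISSION},
        T30State.MESSAGE_TRANSMISSION: {T30State.POST_MESSAGE_PROCEDURE},
        T30State.POST_MESSAGE_PROCEDURE: {
            T30State.MESSAGE_TRANSMISSION,    # another page of the document
            T30State.PRE_MESSAGE_PROCEDURE,   # "re-training" before further communication
            T30State.CALL_RELEASE,            # document complete; hang up
        },
        T30State.CALL_RELEASE: set(),
    }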
The Set-Up and Release states are typical of protocols which place data calls into the public telephone network. The Set-Up state specifies tones to be emitted by the calling and answering machines as part of the analog "handshake" procedure prior to transferring data, and the Release state essentially hangs up the phone. This procedure is illustrated in Fig. 2.
Based on the "ground rules" established during Set-Up, the two fax machines enter the Pre-Message Procedure state and begin exchanging control information to determine compatible capabilities, the maximum reliable signaling rate for the channel, and so on. During the Pre-Message Procedure, the destination machine explains to the sender what capabilities it has (e.g., resolution, page size, receiving speed); based on this information, the sending machine sets the capabilities which will be in effect for the ensuing page of scanned data. The control information is exchanged via "usual" data-modem modulation technologies and, consequently, operates at fairly low data rates (e.g., 2400 bits/s).
During the Message Transmission state, the scanned image of a page of the document is actually transmitted. Transmission entails signaling over the channel at the baud rate determined during "training" and using transmission parameters which are compatible with the capabilities of both sending and receiving fax machines. The "standard" scanning resolution for faxes is 1728 pixels per horizontal 215 millimeter (mm) scan line with 3.85 scan lines per vertical millimeter. This format translates to approximately 204 dots per horizontal inch by 98 dots per vertical inch (dpi). An optional "fine" resolution doubles the number of scan lines to achieve a resolution of 204 × 196 (horizontal × vertical) dpi.
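The dpi figures follow directly from the metric scan parameters; the short calculation below (Python, purely illustrative) makes the conversion explicit.

    MM_PER_INCH = 25.4

    pixels_per_line = 1728       # pixels per horizontal scan line
    line_width_mm = 215          # scan line width in millimeters
    lines_per_mm = 3.85          # standard vertical resolution; "fine" mode doubles this

    horizontal_dpi = pixels_per_line / line_width_mm * MM_PER_INCH
    vertical_dpi = lines_per_mm * MM_PER_INCH
    print(round(horizontal_dpi), round(vertical_dpi))   # 204 98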
To achieve efficient transmission of this volume of data, synchronous, half-duplex modulation techniques (such as ITU-T V.29 at 9.6 kbps) are used which are not compatible with the asynchronous, full-duplex modulation techniques of "usual" data modems (such as ITU-T V.32 at 9.6 kbps). Consequently, a personal computer data modem without specific additional support for fax will not be usable for fax transmission.
After Message Transmission is complete, the Post-Message Procedure re-establishes low-speed communication between the devices for control purposes. This approach of high-speed, synchronous, half-duplex transmission of scanned pages sandwiched between low-speed control states allows for sending multiple-page documents, independent page verification, re-transmission, etc., as well as "re-training" if necessary before further communication. When the document is completely transferred, the session terminates with the Call Release state (3).

Courtesies and Other Etiquette

Fax transmission normally involves copies of already-typed information, e.g., a letter, a memorandum, an article, a chart, a table, or a combination of these documents. Fax etiquette ("fetiquette") requires that the first page being transmitted contain the items listed in Table 4. Clearly, it is preferable to number pages (including the cover page) in consecutive order to assist the recipient in identifying exactly which pages are missing; however, certain communications do not lend themselves to this practice.

THE INTERNET AND THE WORLD-WIDE WEB

The World-Wide Web, or simply "the web," is essentially a point-and-click, graphical user interface (GUI) for the Internet. This GUI is built on a client/server paradigm like the majority of application-level Internet protocols. However, because of its inherent accessibility, web technology evolves at an extremely rapid pace, with multiple, competing standards for interactivity between user and client/server software, incremental distribution of
executable (interpreted or pre-compiled) code, transaction security, and vendor-specific enhancements. The purpose of this section is to briefly highlight the basic structure of web interactions, including some of the standard enhancements which provide useful extra functionality.
The web began at the European Center for Nuclear Research (CERN), where a large collection of researchers needed to share a collection of largely graphical documents easily. Instead of replicating these documents for all researchers, the CERN solution was to use "hyperlinks" to point to the distributed documents (1). With this proposal for a network-based, hyperlinked "web" of documents, supported by the maturity of the Internet protocol suite, the CERN physicists created the electronic equivalent of an atomic chain reaction. The basic result of this technological explosion is that when a person using a freely available "web browser" (or client software) follows a hyperlink from one document to the next, the browser uses a standardized protocol to (1) retrieve the document from the remote web server indicated by the hyperlink, and (2) format and display the document, which may have additional hyperlinks, using the browser's built-in graphical interpreter.

Web Technologies

Interactions between browser/client software and web-server software which enable the retrieval of hyperlinked documents are carried out according to the rules of the Hypertext Transfer Protocol (HTTP). In HTTP, browsers "request" objects such as documents (or files), and servers "respond" with the requested object or with some error condition. All HTTP requests are transferred between computers in clear-text format, and each is followed by one RFC 822 MIME-like response. The most important HTTP requests are listed in Table 5. Of these, the GET request is likely the most common since it is the mechanism by which "web pages" are retrieved for display on the browser's screen. In the table, an "object" can be a properly formatted web page, a file, or some other information to be used in processing web pages or web-based data entry.
When a browser "follows" a hyperlink to connect to a web server, it actually issues a GET request for the item indicated by the hyperlink. A "full request" is composed of possibly several lines, including a GET command (or another command), the hyperlink reference to the object, and the HTTP version number which is supported by the browser. The request terminates with a blank line, as follows:

GET www.someplace.com/documents/file.xxx HTTP/1.0
(blank line)

If the request is successful, the server responds with an HTTP status line followed by an RFC 822-format message with MIME-compliant extensions, some of which are specific to hypertext documents. The message is separated into header and body sections which are divided by a single blank line. The header section is a series of MIME field: data pairs which describe the body, and the body is typically a complex document created using some version of the HyperText Markup Language (HTML) and having multiple MIME-encoded embedded parts (1, 4). A typical response to a successful GET request might be as follows, where the payload has the "header" plus "body" plus "attachment" format as in SMTP's email payloads:
HTTP/1.0 200 Document follows
MIME-Version: 1.0
Server: CERN/3.0
Content-Type: text/html
Content-Length: 8247
(blank line)
(document body)
(document body)
(document body)
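The same request/response exchange can be driven programmatically. The sketch below uses Python's standard http.client module to issue a GET for the document in the example above (the host and path are the article's illustrative ones, not a real site) and then prints the status line, the MIME-style header fields, and finally reads the body.

    import http.client

    conn = http.client.HTTPConnection("www.someplace.com", 80)
    conn.request("GET", "/documents/file.xxx")      # the browser's "full request"
    resp = conn.getresponse()

    print(resp.status, resp.reason)                 # status line, e.g. "200 Document follows"
    for name, value in resp.getheaders():           # MIME-style "field: data" header pairs
        print(f"{name}: {value}")

    body = resp.read()                              # the document body (typically HTML)
    conn.close()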
The Future of HTTP

Interestingly, the HTTP version 1.0 specification is available as an information-only document (RFC 1945). HTTP/1.0 is deployed by most existing web applications even though RFC 1945 does not specify an Internet standard of any kind (1). HTTP/1.0 also has serious, well-documented problems regarding performance and scalability. As a result, the HTTP/1.1 specification was developed and released to the Internet community as a draft standard under RFCs 2616 and 2817. HTTP/1.1 features capabilities for cache control, including an Age header for expiration of cached pages and a Cache-Control directive for restrictions on what can be cached by intermediate servers and the "staleness" of information the browser will accept. Additionally, HTTP/1.1 promotes efficient data access by allowing for transmission of portions of MIME objects (including "chunked" encoding for large or unknown data streams). Associated Internet standardization which has influenced the "next generation" HTTP includes Digest Authentication (RFC 2617), where cryptographic hashes replace the existing Basic Authentication and clear-text passwords of HTTP for secured sites, and State Management (RFC 2965), a standardized approach which allows a server to maintain the relationship between subsequent browser requests. A non-standardized approach to state maintenance called "Cookies" was developed by Netscape Communications Corp., one of the leaders in the deployment of web software. The standard approach of RFC 2965 is compatible with (and bears a great resemblance to) Netscape's Cookies.

Security Issues

As mentioned previously, the security of IP-based protocols is often very lax because of a tendency to transmit everything in ASCII-encoded text strings ("clear text"). The dependence of modern commerce and other transactions on web-based interfaces and Internet transport is not well supported by such unprotected data transfers. As a result, the IETF community has devised implicit "socket-layer" encryption mechanisms for such transactions. The "secure socket layer" (SSL) protocol was the original approach to implicit (passive) security mechanisms for Internet-based transactions. SSL was developed by Netscape Communications to provide privacy and data integrity between two communicating applications. SSL runs between TCP/IP and higher-layer protocols such as HTTP or IMAP, and uses TCP/IP on behalf of the higher-layer protocols.
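Because the encryption sits at the socket layer, an application protocol such as HTTP rides over SSL/TLS essentially unchanged. The sketch below, using Python's standard ssl and socket modules, wraps a TCP connection in TLS and then sends an ordinary HTTP request over it; the host name is the article's illustrative one, and a real exchange would depend on that server actually offering HTTPS on port 443.

    import socket
    import ssl

    HOST = "www.someplace.com"                  # illustrative host from the earlier example
    context = ssl.create_default_context()      # certificate verification, cipher negotiation

    with socket.create_connection((HOST, 443)) as raw_sock:
        # The TLS handshake (authentication, key exchange) happens inside wrap_socket().
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            # The higher-layer protocol (HTTP) is unchanged; it simply rides on TLS.
            tls_sock.sendall(b"GET /documents/file.xxx HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
            print(tls_sock.recv(4096).decode("latin-1", "replace"))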
The Transport Layer Security protocol (TLS), defined in RFC 2246, is the IETF-standardized version of SSL and is based on SSL version 3. The complete TLS protocol has two layers: the TLS Record Protocol encapsulates higher-layer protocols, including the TLS Handshake Protocol. By using the TLS Handshake, communicating processes can authenticate each other, negotiate an encryption algorithm, and exchange cryptographic keys before the application protocol begins transmission. The independence between TLS and the application protocol is a significant strength of such implicit security techniques: higher-layer protocols and applications layer transparently on top of TLS. For example, RFCs 2487 and 2595 describe the use of SMTP, IMAP, and POP with TLS, and RFCs 2817 and 2818 describe the use of HTTP over TLS.

The HyperText Markup Language (HTML)

In addition to the HTTP protocol, which structures the interaction between computers on the WWW, there is a basic protocol or language by which information content is conveyed between the humans who are using the computers. This language is the HyperText Markup Language, or HTML, which is actually a specific application of the Standard Generalized Markup Language (SGML) of ISO standard 8879 (1). Statically authored web documents are composed using HTML syntax and placed under the control of server software at a website. When a human's browser software follows a hyperlink to an existing object, the server sends an affirmative response to the browser (using HTTP as described earlier) and includes an RFC 822/MIME-formatted document. The body of the server's response document is the "web page," which contains particular graphical and formatting commands to be interpreted and used by the browser in displaying the content of the page. The formatting commands are expressed in HTML syntax so that the browser software can adjust the on-screen appearance of the document for different browsing conditions (monitor resolutions, etc.). HTML formatting commands embedded in the document, also called tags, are enclosed in angle brackets and usually come in pairs to indicate the beginning (e.g., <B>) and end (e.g., </B>) of a particular section of formatting (4, 5). Some tags act without a mate, and some tags take arguments (or named parameters). Examples of tags that denote fundamental sections of HTML documents are <HTML>, <TITLE>, and <BODY>. More details about the structure and function of HTML tags can be found in the references and at multiple websites devoted to the subject of web publishing.
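As a small, self-contained illustration of these begin/end tag pairs and tag arguments, the sketch below embeds a minimal, purely illustrative HTML document in a Python string and uses the standard html.parser module to list each opening tag, its arguments, and the matching closing tag.

    from html.parser import HTMLParser

    # A minimal, illustrative page: tags in angle brackets, mostly in begin/end pairs,
    # with the anchor tag <A> taking an HREF argument that holds a hyperlink.
    PAGE = """<HTML><HEAD><TITLE>Example page</TITLE></HEAD>
    <BODY><P>See the <A HREF="documents/file.xxx">report</A>.</P></BODY></HTML>"""

    class TagLister(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("begin:", tag, attrs if attrs else "")
        def handle_endtag(self, tag):
            print("end:  ", tag)

    TagLister().feed(PAGE)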
Forms and Tables

As the popularity of the web increased, the limited capabilities of HTML version 1.0 were soon outgrown. As a result, standardization work on HTML versions 2.0 and 3.0 enabled many more capabilities for clever web-page content, including the ability for browsers to render tabular data, use subsections of images as hyperlinks, and accept user-entered data into a form for submission and processing by special software at the web server.
The <TABLE> tags (introduced formally in HTML 3.0; see RFC 1942) and associated formatting rules allow a web page to contain cells of data organized in rows and columns. The cells can contain static text as well as any valid HTML tag, including images, anchors, and so on. Several new formatting tags are defined to accommodate the special cases of tabular data, including tags for rows and captions, cell effects such as horizontal/vertical alignment, justification, and borders, and <TH> and <TD> tags to distinguish headings and/or data for particular cell locations. The <FORM> tags (introduced formally in HTML 2.0; see RFC 1866) and associated formatting rules allow a web page to contain special fields to capture data entered by the user of the browser and transport the data back to the web server for processing (1–5). In a general sense, the HTML form (in concert with the HTTP POST function) is a mechanism for declaring variables for a hyperlinked subroutine, interactively capturing data for the variables, pushing the data values on the "stack," and invoking the procedure on the computer which hosts the web server. The arguments of the