Innovative Technology for Computer Professionals
June 2005
Frames of Reference, p. 9 Security Technologies Go Phishing, p. 18
http://www.computer.org
Programmable Matter, p. 99
June 2005, Volume 38, Number 6
PERSPECTIVES
25 The Case for Technology in Developing Regions
Eric Brewer, Michael Demmer, Bowei Du, Melissa Ho, Matthew Kam, Sergiu Nedevschi, Joyojeet Pal, Rabin Patra, Sonesh Surana, and Kevin Fall
In many cases, "First World" technology has been a poor fit in developing regions, and there is a need for research that could have a beneficial economic impact and contribute to an improved quality of life by providing access to technology in these areas of the world.
COMPUTING PRACTICES
39 Reflection and Abstraction in Learning Software Engineering's Human Aspects
Orit Hazzan and James E. Tomayko
Intertwining reflective and abstract modes of thinking in a course that focuses on software engineering's human aspects can increase students' awareness of the discipline's richness and complexity.
COVER FEATURES
46 Project Numina: Enhancing Student Learning with Handheld Computers
Barbara P. Heath, Russell L. Herman, Gabriel G. Lugo, James H. Reeves, Ronald J. Vetter, and Charles R. Ward
Project Numina researchers are developing a mobile learning environment that fosters collaboration among students and between students and instructors.
57 Architecting Multimedia Environments for Teaching
Gerald Friedland and Karl Pauls
Research on the use of computational equipment in education must focus on combining technologies that target the classroom to allow specifying "what" rather than "how" tasks should be done.

Cover design and artwork by Dirk Hagner

ABOUT THIS ISSUE
Computational devices such as calculators and computers have long been a fixture in the modern classroom—and new, inexpensive wireless devices will only continue this trend. In this issue, we look at innovative applications of technology in the classroom, including using handheld devices to create a collaborative mobile learning environment. Other topics include a framework for the cost-effective delivery of digital content, an approach that combines technologies to architect multimedia environments for teaching, and the use of a small robotic insect to teach students the concepts of robotics and embedded systems.
66 Toward an Electronic Marketplace for Higher Education
Magda Mourad, Gerard L. Hanley, Barbra Bied Sperling, and Jack Gunther
A case study of a MERLOT-IBM prototype shows the feasibility of a framework for the secure creation, management, and delivery of digital content for education and offers insights into service requirements.
77 Stiquito for Robotics and Embedded Systems Education
James M. Conrad
A new version of Stiquito, a small robotic insect, features a preprogrammed microcontroller board that students can use to learn about the concepts of robotics and embedded systems.
IEEE Computer Society: http://www.computer.org
Computer: http://www.computer.org/computer
[email protected]
IEEE Computer Society Publications Office: +1 714 821 8380
OPINION
9 At Random: Frames of Reference, Bob Colwell

NEWS
14 Industry Trends: What Lies Ahead for Cellular Technology? George Lawton
18 Technology News: Security Technologies Go Phishing, David Geer
22 News Briefs: Researchers Develop Memory-Improving Materials ■ Using Computers to Spot Art Fraud ■ Chilling Out PCs with Liquid-Filled Heat Sinks
MEMBERSHIP NEWS
82 Computer Society Connection
85 Call and Calendar

COLUMNS
95 Entertainment Computing: Utility Model for On-Demand Digital Content, S.R. Subramanya and Byung K. Yi
99 Invisible Computing: Programmable Matter, Seth Copen Goldstein, Jason D. Campbell, and Todd C. Mowry
104 The Profession: What's All This about Systems? Rob Schaaf
DEPARTMENTS
4 Article Summaries
6 Letters
12 32 & 16
54 IEEE Computer Society Membership Application
88 Career Opportunities
92 Advertiser/Product Index
93 Products
94 Bookshelf

Membership Magazine of the IEEE Computer Society
NEXT MONTH:
Multiprocessor SoCs
COPYRIGHT © 2005 BY THE INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS INC. ALL RIGHTS RESERVED. ABSTRACTING IS PERMITTED WITH CREDIT TO THE SOURCE. LIBRARIES ARE PERMITTED TO PHOTOCOPY BEYOND THE LIMITS OF US COPYRIGHT LAW FOR PRIVATE USE OF PATRONS: (1) THOSE POST-1977 ARTICLES THAT CARRY A CODE AT THE BOTTOM OF THE FIRST PAGE, PROVIDED THE PER-COPY FEE INDICATED IN THE CODE IS PAID THROUGH THE COPYRIGHT CLEARANCE CENTER, 222 ROSEWOOD DR., DANVERS, MA 01923; (2) PRE-1978 ARTICLES WITHOUT FEE. FOR OTHER COPYING, REPRINT, OR REPUBLICATION PERMISSION, WRITE TO COPYRIGHTS AND PERMISSIONS DEPARTMENT, IEEE PUBLICATIONS ADMINISTRATION, 445 HOES LANE, P.O. BOX 1331, PISCATAWAY, NJ 08855-1331.
Editor in Chief
Doris L. Carver
Louisiana State University
[email protected]

Associate Editors in Chief
Bill N. Schilit, Intel
Kathleen Swigger, University of North Texas

Computing Practices
Rohit Kapur
[email protected]

Perspectives
Bob Colwell
[email protected]

Research Features
Kathleen Swigger
[email protected]

Special Issues
Bill Schilit
[email protected]

Web Editor
Ron Vetter
[email protected]

Area Editors
Computer Architectures: Douglas C. Burger, University of Texas at Austin
Databases/Software: Michael R. Blaha, OMT Associates Inc.
Embedded Computing: Wayne Wolf, Princeton University
Graphics and Multimedia: Oliver Bimber, Bauhaus University Weimar
Information and Data Management: Naren Ramakrishnan, Virginia Tech
Multimedia: Savitha Srinivasan, IBM Almaden Research Center
Networking: Jonathan Liu, University of Florida
Security: Bill Arbaugh, University of Maryland
Software: H. Dieter Rombach, AG Software Engineering
Standards: Jack Cole, US Army Research Laboratory
Web Technologies: Simon S.Y. Shim, San Jose State University

Column Editors
At Random: Bob Colwell
Bookshelf: Michael J. Lutz, Rochester Institute of Technology
Entertainment Computing: Michael R. Macedonia, Georgia Tech Research Institute
Invisible Computing: Bill N. Schilit, Intel
IT Systems Perspectives: Richard G. Mathieu, St. Louis University
The Profession: Neville Holmes, University of Tasmania

Advisory Panel
James H. Aylor, University of Virginia
Thomas Cain, University of Pittsburgh
Ralph Cavin, Semiconductor Research Corp.
Dan Cooke, Texas Tech University
Ron Hoelzeman, University of Pittsburgh
Edward A. Parrish, Worcester Polytechnic Institute
Ron Vetter, University of North Carolina at Wilmington
Alf Weaver, University of Virginia

CS Magazine Operations Committee
Bill Schilit (chair), Jean Bacon, Pradip Bose, Doris L. Carver, Norman Chonacky, George Cybenko, John C. Dill, Frank E. Ferrante, Robert E. Filman, Forouzan Golshani, David Alan Grier, Rajesh Gupta, Warren Harrison, James Hendler, M. Satyanarayanan

CS Publications Board
Michael R. Williams (chair), Michael R. Blaha, Roger U. Fujii, Sorel Reisman, Jon Rokne, Bill N. Schilit, Nigel Shadbolt, Linda Shafer, Steven L. Tanimoto, Anand Tripathi

Editorial Staff
Senior Acquisitions Editor: Scott Hamilton
Managing Editor: Judith Prow
Senior Editor: James Sanders
Senior News Editor: Lee Garber
Associate Editor: Chris Nelson
Staff Lead Editor: Mary-Louise G. Piner
Membership News Editor: Bob Ward
Manuscript Assistant: Bryan Sallis
Design: Larry Bauer, Dirk Hagner
Production: Larry Bauer

Administrative Staff
Executive Director: David W. Hennage
Publisher: Angela Burgess
Assistant Publisher: Dick Price
Membership & Circulation Marketing Manager: Georgann Carter
Business Development Manager: Sandy Brown
Senior Advertising Coordinator: Marian Anderson

2005 IEEE Computer Society President
Gerald L. Engel
Circulation: Computer (ISSN 0018-9162) is published monthly by the IEEE Computer Society. IEEE Headquarters, Three Park Avenue, 17th Floor, New York, NY 10016-5997; IEEE Computer Society Publications Office, 10662 Los Vaqueros Circle, PO Box 3014, Los Alamitos, CA 90720-1314; voice +1 714 821 8380; fax +1 714 821 4010; IEEE Computer Society Headquarters, 1730 Massachusetts Ave. NW, Washington, DC 20036-1903. IEEE Computer Society membership includes $19 for a subscription to Computer magazine. Nonmember subscription rate available upon request. Single-copy prices: members $20.00; nonmembers $94.00. Postmaster: Send undelivered copies and address changes to Computer, IEEE Membership Processing Dept., 445 Hoes Lane, Piscataway, NJ 08855. Periodicals Postage Paid at New York, New York, and at additional mailing offices. Canadian GST #125634188. Canada Post Corporation (Canadian distribution) publications mail agreement number 40013885. Return undeliverable Canadian addresses to PO Box 122, Niagara Falls, ON L2E 6S8 Canada. Printed in USA. Editorial: Unless otherwise stated, bylined articles, as well as product and service descriptions, reflect the author's or firm's opinion. Inclusion in Computer does not necessarily constitute endorsement by the IEEE or the Computer Society. All submissions are subject to editing for style, clarity, and space.
2
Computer
Published by the IEEE Computer Society
ARTICLE SUMMARIES

The Case for Technology in Developing Regions
pp. 25-38
Eric Brewer, Michael Demmer, Bowei Du, Melissa Ho, Matthew Kam, Sergiu Nedevschi, Joyojeet Pal, Rabin Patra, Sonesh Surana, and Kevin Fall

The United Nations' Millennium Development Goals, established in 2000, include the following: "Make available the benefits of new technologies—especially information and communications technologies." A majority of technology's benefits have been concentrated in industrialized nations and therefore limited to a fraction of the world's population. The authors believe that technology has a large role to play in developing regions, that "First World" technology to date has been a poor fit in these areas, and that developing regions thus need technology research. Despite the relative infancy of technology studies in developing regions, anecdotal evidence suggests that having access to technology has a beneficial economic impact.

Reflection and Abstraction in Learning Software Engineering's Human Aspects
pp. 39-45
Orit Hazzan and James E. Tomayko

A course designed to increase students' awareness of the complexity of software development's cognitive and social aspects can introduce them to reflective mental processes and to tasks that invite them to apply abstract thinking. For the past three years, the authors have taught a Human Aspects of Software Engineering course at both the Technion-Israel Institute of Technology and the School of Computer Science at Carnegie Mellon University. This course aims to increase the students' awareness of the problems, dilemmas, questions, and conflicts software engineering professionals could encounter during the software development process.

Project Numina: Enhancing Student Learning with Handheld Computers
pp. 46-53
Barbara P. Heath, Russell L. Herman, Gabriel G. Lugo, James H. Reeves, Ronald J. Vetter, and Charles R. Ward

A convergence of technologies is giving small computing platforms, such as the pocket PC, the ability to support telecommunications, audio and video applications, mathematical computations, word processing, electronic spreadsheets, and standard PDA functions. Wrapped into a single device, the handheld can replace all the traditional electronic hardware students commonly carry in their backpacks. Unfortunately, few high-quality educational applications are currently available for handhelds, especially in mathematics and science. The authors' extensive experience with commercially available and homegrown software suggests that using handhelds engages students more fully. Yet handheld technology also has many shortcomings. To address these deficiencies, they are developing a mobile learning environment designed to foster collaboration in a virtual learning community.

Architecting Multimedia Environments for Teaching
pp. 57-64
Gerald Friedland and Karl Pauls

Most teachers still rely on well-established primitive aids such as the chalkboard, one of history's earliest teaching tools. Since the advent of computational devices in education, researchers have sought the means for properly integrating them and taking advantage of their capabilities. The authors propose building a reliable, ubiquitous, adaptable, and easy-to-use technology-integrating black box. By reusing and enhancing components, this system will become increasingly reliable, while a building-block architecture will keep it manageable.

Toward an Electronic Marketplace for Higher Education
pp. 66-75
Magda Mourad, Gerard L. Hanley, Barbra Bied Sperling, and Jack Gunther

Digital content's availability has created opportunities for universities and publishers to improve the marketplace. So far, researchers have focused on perfecting a digital rights management system and content-protection schemes, but the real challenge lies in designing end-to-end systems that integrate with the changing needs of society and a dynamic business world. As a first step toward designing such a system, IBM and the Multimedia Educational Resource for Learning and Online Teaching developed a framework that IBM designers used to build a prototype e-marketplace, which IBM and MERLOT evaluated in a field test.

Stiquito for Robotics and Embedded Systems Education
pp. 77-81
James M. Conrad

Stiquito, a small hexapod robot, has been used for years in education. Most Stiquito robots built so far have been educational novelties because they could not be controlled by a programmable computer. Stiquito Controlled, a new self-controlled robot, uses a microcontroller to produce forward motion by coordinating the operation of its legs. Although the controller is sold programmed, educators and researchers can reprogram the board to examine other areas of robotics and embedded system design.
HOT Chips 17
A Symposium on High-Performance Chips
August 14-16, 2005, Memorial Auditorium, Stanford University, Palo Alto, California
ADVANCE PROGRAM
HOT Chips brings together designers and architects of high-performance chips, software, and systems. Presentations focus on up-to-the-minute real developments. This symposium is the primary forum for engineers and researchers to highlight their leading-edge designs. Three full days of tutorials and technical sessions will keep you on top of the industry.

Sunday, August 14
Morning Tutorial: Virtual Machines: Architectures, Implementations, and Applications. Prof. Jim Smith, Univ. of Wisconsin
Afternoon Tutorial: Low Power, High Performance Microprocessor Design. Kevin Nowka, Pradip Bose, Sani Nassif, IBM

Monday, August 15
Keynote: Facing the Hot Chip Challenge (Again). William Holt, VP & General Manager, Technology & Manufacturing Group, Intel
CELL Processor
• A Novel SIMD Architecture for the CELL Heterogeneous Chip-Multiprocessor (IBM)
• The IBM CELL Interconnect Unit, Bus, and Memory Controller (IBM)
• Super Companion Chip with A/V-Oriented Interface for the CELL Processor (Toshiba)
• Programming and Performance Evaluation of the CELL Processor (Toshiba)
Specialized Architectures I
• The Magpie: A Low-Power Real-Time Milliflow Aggregation Processor (Intel)
• Barcelona: A Fibre Channel Switch SoC for Enterprise SANs (Cisco)
• High-Performance Pattern-Matching Engine for Intrusion Detection (IBM)
Advanced Technology
• Photonically Enabled CMOS (Luxtera)
• 40-GHz Operation of a Single-Flux-Quantum (SFQ) Switch Scheduler (ISTEC)
Media Processors
• TVP2000: A Real-Time H.264 High-Definition Video Processor (Telairity)
• Next-Generation Audio Engine (Tensilica)
• Nexperia PNX1700 High-Speed, Low-Cost Super-Pipelined Media Processor (Philips)
• An Ultra High Performance, Scalable DSP Family for Multimedia (Cradle)
Panel: The Next Killer Application. Moderator: Howard Sachs, Telairity; panelists include John Mashey, Sensei Partners
Specialized Architectures II
• Low-Power, Networked MIMD Processor for Particle Physics (U. of Heidelberg)
• The Design and Implementation of the TRIPS Prototype Chip (Univ. of Texas)
• Digitally Assisted Analog Circuits (Stanford)

Tuesday, August 16
Reconfigurable Processors I
• MeP-h1: A 1-GHz Configurable Processor Core (Toshiba)
• Software Configurable Processors Change System Design (Stretch)
• DAPDNA-2: A Dynamically Reconfigurable Processor with 376 32-bit Processing Elements (IPFlex)
• Ascenium: A Continuously Reconfigurable Architecture (Ascenium)
Keynote: Multiple Cores, Multiple Pipes, Multiple Threads: Do We Have More Parallelism Than We Can Handle? David Kirk, Chief Scientist, NVIDIA
Reconfigurable Processors II
• The Nios II Family of Soft, Configurable FPGA Microprocessor Cores (Altera)
• Configurable Systems-on-Chip with Virtex-4, the Latest 90-nm FPGA Family (Xilinx)
• Design and Application of BEE2, a High-End Reconfigurable Computing System (UC Berkeley)
Processors and Systems
• Intel 8xx Series and Paxville Xeon-MP Microprocessors (Intel)
• TwinCastle: A Multiprocessor North Bridge Server Chipset (Intel)
• Dynamically Optimized Power Efficiency with Foxton Technology (Intel)
• Title to Be Announced (Microsoft)

Organizing Committee
Chair: Richard Uhlig, Intel
Vice Chair: Yusuf Abdulghani, Apple
Finance: Lily Jow, HP
Publicity: Donna Wilson, Donna Wilson & Assoc.; Gail Sachs, Telairity
Advertising: Allen Baum, Intel
Sponsorship: Amr Zaky, Qualcomm
Publications: Gordon Garb, GHI
Registration: Ravi Rajamani, Sun
Local Arrangements: Bob Lashley, Sun; Randall Neff; Aaron Barker, Sun
Webmaster: Galina Moskovkina, HP
At Large: Don Alpert, Camelback Arch.; Slava Mach; Howard Sachs, Telairity; Bob Stewart
Program Committee
Co-Chairs: Alan Jay Smith, UC Berkeley; John Sell, Microsoft
Members: Forest Baskett, NEA; Keith Diefendorff, Apple; Pradeep Dubey, Intel; Christos Kozyrakis, Stanford; Teresa Meng, Stanford; Chuck Moore, AMD; John Nickolls, NVIDIA; Rakesh Patel, Altera; Tom Peterson, MIPS; Howard Sachs, Telairity; Mitsuo Saito, Toshiba; Kimming So, Broadcom; Marc Tremblay, Sun; John Wawrzynek, UC Berkeley
This is a preliminary program; changes may occur. For the most up-to-the-minute details on presentations and schedules, and for registration information, please visit our web site where you can also check out HOT Interconnects (another HOT Symposium being held following HOT Chips): Web: http://www.hotchips.org Email:
[email protected]
A Symposium of the Technical Committee on Microprocessors and Microcomputers of the IEEE Computer Society
LETTERS
TACKLING SOCIAL CHALLENGES
I always find The Profession column interesting, but February's essay (Neville Holmes, "The Profession and the Big Picture," pp. 104, 102-103) was particularly thought-provoking.
Indeed, it can hardly be said that "an effective percentage of the public can understand the relevant scientific evidence and reasoning." However, by definition, democracy is power to the masses, and the average of any population is not going to be representative of the current peak of human knowledge in any field, scientific or otherwise. Given the complexity of sustaining an enormous number of human beings on this planet with diminishing resources, it is doubtful whether democratic forms of government are intellectually capable of making the complex and difficult decisions required.
However, in Western developed countries, we currently seem to be living in an age where democracy is the be-all and end-all, the panacea that will solve all problems, and, above all, the one and only great truth that cannot be questioned. New forms of representation and government should be envisioned; otherwise, important decisions will be based on media spin and marketing criteria rather than scientific principles.
Andrew Bertallot
Bernardston, Mass.
[email protected]
VOTING SYSTEMS
The ideas that Thomas K. Johnson puts forward in "An Open-Secret Voting System" (The Profession, Mar. 2005, pp. 100, 98-99) are basically sound. Indeed, the combination of computer-based entry and a machine-readable paper ballot is ideal.
One aspect of this proposal, however, goes overboard, paving the way to the totalitarian regimes that a democratic system is supposed to eliminate in the first place. Offering a way to return a ballot to the voter also introduces the possibility that organized groups could control the voting process by simply coercing citizens to give up their voter ID when they exit the polls. This has nothing to do with the technology in use: If there is any way to get an untampered voting receipt, a coercing organization could control the votes of its clients by requiring them to hand over this receipt. It could then verify with impunity how they voted.
This is the foremost reason why secret-ballot voting has been established in all democratic countries: Once a ballot is cast, there is almost no way to discover how a given person voted, other than asking him or her. I use the term "almost" because some inference can be made in local elections with a large enough number of candidates based on statistical patterns among a selected group of voters.
Therefore, the idea of assigning a unique voter ID must be proscribed from this scheme: There must be no way of tracing a ballot to a given voter, lest democracy vanish.
Didier H. Besset
Switzerland
[email protected]

The author responds: Consider this: There is no practical way to force anyone to vote in his best interests as opposed to giving in to pressure from those who might wish to gain votes through bribes or intimidation. The possibility that a citizen might sell his vote for money has always been with us. There already are ways to produce a copy of a ballot by using small cameras.
Obtaining votes through bribes or threats must be done one vote at a time, as "retail" vote theft. Ballot-box stuffing, tampering with voting machinery, and so on allow "wholesale" vote theft. In my humble opinion, the benefits of using the "open-secret" system to prevent the wholesale theft of votes outweigh the potential problem of their retail theft, at least in the jurisdictions I am most familiar with—the midwestern United States.
Perhaps in other parts of the world, the problems with voter intimidation might be too great to allow using printed receipts. But it occurs to me that there is another way to avoid the problem. If a voter chooses not to keep a copy of his ballot but wants to keep track of his ballot ID number, he could use a cell phone to transmit the number to a trusted friend or relative. Or he could use his phone to send the number via e-mail to his private e-mail address. He could then leave the polling place without having any record of his vote on his person. As I stated in the conclusion to this article, I hope that a discussion of the benefits and problems with my proposal might lead to something even better.

The otherwise excellent "An Open-Secret Voting System" was marred by its fourth paragraph, which was a purely political statement that contributed nothing to the thesis of the article. This paragraph suggests that the US should place a higher priority on fixing its own voting systems than on exporting democracy to other nations. The implications of this suggestion are astounding. It means that, in the author's judgment, it would be better for the Taliban to continue to oppress the people of Afghanistan, especially its women and girls, than to risk an attempt at democracy with less-than-perfect voting systems. It means that it would be better for hundreds of thousands of Iraqi citizens to be tortured or simply "disappeared" than to run the risk of a less-than-perfect vote tally in a peaceful, democratic transition of power.
Before this year, there was no risk that votes would be miscounted in Afghanistan and Iraq. There was also no risk that women in Afghanistan would enjoy the freedom to be educated or that Shiites in Iraq would enjoy freedom of religion. Now, we do run the risk of vote miscounting. However, who could seriously suggest that living with this risk is worse than living under the prior governments?
I would like to make my own suggestion for an improved voting system: After each voter casts a vote, one of the voter's fingers is marked with indelible ink to prevent fraudulent multiple voting. It's not very high tech, but it has already been proven to work in two countries.
Ted Hills
Lambertville, N.J.
[email protected] The author responds: Voting systems and democracy are political subjects by nature. It should not surprise any reader that an essay on the topic might include political statements. Hills’s comments about the people of Afghanistan, oppression of women and girls, no risk of miscounting Iraqi votes, and so forth ignore some relevant facts: After the Iraqi military was defeated in the first Gulf War, the first Bush administration encouraged an uprising of Shiites against Hussein’s government. Then, as the Iraqi government used its remaining forces (including helicopter gunships) to slaughter the rebels, the US stood by and did nothing to stop the bloodshed. Protection of the Shiites’ religious freedom, to say nothing of their very survival, was apparently not politically expedient at that time. But years later, after no weapons of mass destruction were found in Iraq, the second Bush administration decided that the liberation of the Iraqi people was a top priority, or at least, the reason du jour for “regime change.”
The suggestion about marking voters’ fingers with indelible ink may be workable in some settings. However, if there is a group that is intent on deterring voters from showing up at the polls (for example, Iraqi insurgents) the stained thumb could be hazardous to a voter’s health. The inked digit might be quickly amputated along with the offending hand or arm.
SOLVING THE SPAM PROBLEM
In his April column (The Profession, "In Defense of Spam," Apr. 2005, pp. 88, 86-87), Neville Holmes offers an uncommon perspective on spam. I think more people should adopt his view that "Technical solutions are always imperfect—at least to some degree. This provides a compelling reason to improve the technology, not to resort to legislation."
I used that perspective to develop a technology solution for resolving the spam problem. In general, the current solutions can be described as filtering out spam. My concept is based on filtering in real e-mail. It is impossible to determine whether an e-mail message is spam after it is sent. Instead, the solution—e-mail sender verification—is to provide a simple, easy-to-deploy way to verify real e-mails with embedded tags before sending them. The technology needed for verification is much simpler than that needed for authentication. The tag itself can be tampered with, but the tampering is easily detected, providing a simple means of differentiating between spam and real e-mail messages on the receiver side.
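The letter does not specify how the embedded tags work. As a purely hypothetical sketch of the general idea, a tamper-evident tag could be a keyed hash (HMAC) over the message body, computed with a key shared between sender and receiver; the header name and key handling below are illustrative assumptions, not the author's scheme:

```python
import hashlib
import hmac

# Hypothetical pre-shared key; the letter does not say how keys would be managed.
SHARED_KEY = b"sender-recipient-shared-secret"

def tag_message(body: str) -> str:
    """Sender side: embed a verification tag in the outgoing message."""
    mac = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}\nX-Verify-Tag: {mac}"

def is_genuine(tagged: str) -> bool:
    """Receiver side: recompute the tag; tampering with body or tag fails."""
    body, sep, tag = tagged.rpartition("\nX-Verify-Tag: ")
    if not sep:
        return False  # untagged mail is treated as suspect
    expected = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg = tag_message("Hello, this is a real e-mail.")
print(is_genuine(msg))                          # True
print(is_genuine(msg.replace("real", "fake")))  # False: tag no longer matches
```

This illustrates the "filtering in" idea: rather than scoring received mail, the receiver accepts only messages carrying a valid tag. A deployable version would additionally need standardized headers and a key-distribution mechanism, neither of which the letter describes.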
This solution will be 100 percent effective if both the sender and the recipient use it. Additional details are available at http://strategygroup.net/esv/.
Another issue encourages continuation of the spam problem: In some cases, adoption of a real solution for spam conflicts with business priorities. Any simple and effective solution to the spam problem would decrease the revenues of companies that sell antispam software, driving some into bankruptcy and leaving their consultants jobless. For more information on the strong efforts to keep this market growing, see "Filling the Gap Between Authentication and Spam Protection" by Hadmut Danisch at http://www.ftc.gov/bcp/workshops/e-authentication/presentations/11-10-04_danisch.pdf.
George Mattathil
San Bruno, Calif.
[email protected]

Neville Holmes is a brave man to take a rational approach to spam. I hope his comments don't generate too many irate letters from people who always think a new law will solve their problems.
My company and its competitors believe they have a solution to spam, albeit one that some SMTP hold-outs have yet to accept. Verizon has been sued for lost messages, so contract and constitutional laws may preclude ISP filtering as a solution. Unlike SMTP, newer technologies can send XML messages securely using encryption for both storage and transmission. All senders are authenticated, and users can't forge message headers. Each message has tracking, archiving, auditing, message retention, and electronic signatures built in. Because it's a "closed network that's open to all," a problem user can be restricted from sending through the network. It uses a federated architecture like regular e-mail, though each node is authenticated as well, so rogues can't just set up a cheap server and hope to join the network. Marketing messages are
allowed because that's the way the business world works, but spamming is definitely prohibited, and when it is uncovered, such senders can be cut off immediately.
Converting SMTP to something more secure and reliable will take time, just as e-mail itself took decades to grow. Businesses will drive the growth of better technologies more effectively than myriad unenforceable laws. E-mail will become the CB radio of messaging, something left to the masses while businesses migrate to more trusted solutions.
David Wall
Kirkland, Wash.
[email protected]

Neville Holmes responds: I would like to think of it as adherence to professional principles rather than bravery. Anyway, living in Tasmania makes such bravery easy.

From the opening paragraph of "In Defense of Spam," I get the impression that Neville Holmes thinks that spam is not a serious problem, but I can assure him that it is. At last check, my account was getting more than 2,000 spam messages an hour, and the spam was actually coming in faster than my e-mail program could delete it. Even if the individual messages are quite short, this still represents an incredible amount of traffic, and it puts a significant financial load on all the parties involved in getting it to me. I formerly ran an antivirus company, and I can only assume that some hacker thinks I still do and is trying to mount a denial of service attack.
Call for participation 13th IEEE Int. Requirements Engineering Conference http://www.re05.org
August 29th – September 2nd 2005, Paris, La Sorbonne, France Engineering Successful Products Steering Committee Chair: Roel Wieringa U. Twente, Netherlands
General Chair: Colette Rolland U. Paris 1-Panthéon Sorbonne, France
Program Chair: Joanne Atlee U. of Waterloo, Canada
RE’05 will bring together researchers, practitioners, and students to exchange problems, solutions, and experiences concerning requirements. The conference will emphasize the crucial role that requirements play in the successful development and delivery of systems, products, and services. Technical and Industrial program: The main conference will run from August 31 to September 2 and will feature parallel sessions of technical papers, mini-tutorials, and research-tool demonstrations. Research papers explore new ideas for how to understand and document requirements. The practitioner track is a special series of paper sessions and invited talks that focuses on industrial experiences with requirements-related problems and methods. Workshops, Tutorials and a Doctoral Symposium will be held August 29-30. Tutorials cover stateof-the-art requirements engineering methods, techniques, and tools. Workshops provide a forum to exchange ideas and to discuss novel and ground-breaking requirements engineering approaches. Deadlines for workshop papers range from end of May to mid-June. See the conference web site for details. The list of the twelve workshops and nine tutorials is available at the RE’05 web site. Keynote speakers: Jean-Pierre Corniou, CIO at Renault, author of “The Knowledge Society”, Hermès, 2002, 1999 IT Manager of the Year award from the French press, president of the CIGREF (Club Informatique des Grandes Entreprises Françaises)
Daniel Jackson, Professor of Computer Science at MIT, Chair of The National Academies study on “Sufficient
Fortunately, most of the spam is of the type “random address”@provider, so I had my host bounce anything that is not addressed to either “my name” or “enquiries.” This, together with a few simple rules, has reduced the spam to a manageable level.

I agree that attempts to ban spam are unlikely to succeed, but I also believe that there is a very simple commercial solution to the problem. Junk snail mail is a nuisance, but it is tolerable because the senders must pay for it, so they will only send it if there seems to be a reasonable probability of making a profit. On the other hand, because of a historical accident, spammers don’t have to pay to send their messages, so they can operate profitably even though their success rate is extraordinarily low. If we modify the e-mail protocols so that the sender has to pay a small sum for every e-mail he sends, spam would soon be reduced to manageable levels. More details are available at http://www.corybas.com/misc.htm.

Roger Riordan
Director, Cybec Foundation
[email protected] Neville Holmes responds: It seems quite a few readers inferred that by defending spam I was denying that there was a problem. My viewpoint actually is that spammers are the real problem, that the spam symptom would best be solved by technical and commercial measures such as Roger Riordan suggests, and that the governments of the world should be concerned about why people want to deceive and rob other people. This last point will perhaps be made clearer by an essay that is available online at society.guardian.co.uk/Print/0,51846 35-105909,00.html.
Computer
We welcome your letters. Send them to [email protected].
AT RANDOM
Frames of Reference Bob Colwell
An amazing thing happened recently: A nonengineering person publicly praised what our profession has accomplished. Marilyn vos Savant writes a weekly column for Parade magazine, part of the Sunday newspaper for many people in the US. Vos Savant reportedly has the highest IQ ever recorded. Even if you believe that IQs don’t mean much, vos Savant corroborated her brilliance when she stated in her 1 May 2005 column that engineers are most responsible for our technological society and as a group are underappreciated. I knew I liked her.
THE ARTS/SCIENCE CHASM

A few weeks ago, William Wulf, president of the National Academy of Engineering, came to town and delivered a lecture. Wulf commented that he had been pondering C.P. Snow’s famous “two cultures” lecture from 1959, in which Snow pointed out the chasm between practitioners of the arts and the sciences.

Most people can easily see that there are value differences between these two groups. Artists take it as a given that there are no absolute points of reference in music, sculpture, or painting, and artists, their peers, and the buying public must make value judgments. Artists who try to “follow the rules” too closely tend to need part-time jobs in the service industry—as do artists who break too many of them. Scientists find the prospect of relative truth utterly distasteful and believe
their ultimate arbiter is Nature. Indeed, they’ve learned this lesson the hard way: There are many instances in the history of science in which various fields went astray because of the influence of scientists whose opinions simply were wrong—Blondlot’s N-rays, for example. Within the scientific realm, scientists are entirely correct to aspire to something more than peer approval for confirmation of their ideas.

While both groups are correct—or at least mostly consistent—from within their respective frames of reference, finding value from their mutual diversity has been an elusive goal. More than 40 years later, firefights are still being waged over which of these two camps needs to grow up and learn the other’s language.

Published by the IEEE Computer Society
There have even been guerrilla actions. Alan Sokal, an NYU physicist, became convinced that the issue wasn’t just reference frames, but a simple lack of intellectual discipline. To “prove” his point, Sokal submitted a bogus article to a cultural studies journal, the chief attribute of which was to flatter the editors’ preconceptions. If C.P. Snow thought he had stuck a pole into a hornet’s nest, Sokal must have felt like he’d knocked it down, kicked it around, and then superglued it to his head. And those he infuriated from the softer sciences would have had no sympathy for him.

What Snow noticed was that the gap wasn’t just vocabulary, although that often was (and is) a contributing factor. Snow realized that there were differing frames of reference at work. Effective communication requires that the transmitter have some model in mind for the receiver. When I write about engineering in this column, I presume that my readers understand a lot about the computing field, its aims, limits, and place in society—as well as technology, science, and many other things. With those assumptions, I can safely refer to events or ideas without having to offer extensive preliminary explanations. I write from a certain frame of reference and to a certain frame of reference, and because those frames are closely aligned, I can try to convey ideas in an efficient and—if I do it right—generally unambiguous fashion.
A BLEEPING MOVIE

But if a Camp A person wants to convey a subtlety to a Camp S person, it is by no means safe to make such assumptions. This point came home to me recently while I was watching What the Bleep Do We Know!? (www.whatthebleep.com), a movie that is an odd concoction of quantum physics and New Age mysticism. During the first half of the movie, the heroine moves dreamily through a disjointed plot that seems mostly to serve as a tenuous scaffold on which to hang
quantum vignettes and mini-sermons on how weird physics is. I rather liked this part because quantum physics is weird, weird in ways most people can’t even imagine. It’s fun to watch your fellow viewers get that stunned “No way!” look on their faces when they realize the physicist isn’t kidding about how things work.

My approval of the film abruptly ended about three-quarters of the way through, however, when I noticed that in between the physicists talking about entanglement, one particular interviewee kept reappearing, and I couldn’t make heads or tails out of what she said. Eerily, she was precisely mimicking the physicists’ intonations, facial expressions, and utter confidence, but to me she was speaking utter gibberish. It suddenly dawned on me: She wasn’t a physicist—she was some kind of New Age mystic who had borrowed the physicists’ language and was happily doing free-association between quantum physics and her personal religious beliefs.

I don’t begrudge this person her opinions; she’s entitled to hold them. But I don’t think she’s entitled to inflict them on unsuspecting viewers as if they’re every bit as valid and well-supported as what the physicists were saying. Quantum mechanics is one of the finest achievements of the human mind. Einstein, Bohr, Fermi, Feynman—some of the smartest humans ever born spent their entire lives teasing out some of Nature’s best-hidden secrets. That they were geniuses doesn’t mean they were right. But the fact that I’m typing this on a computer that works as advertised means that if they were wrong, it’s not by much. When the physicists in the movie were explaining a concept, the ideas they were describing represented the collective mutual understanding of whole generations of scientists—it wasn’t just their personal speculations. I believe the scientific process elevates quantum mechanics to an entirely different plane than one person’s random opinions.
The issue here isn’t whether pure science and practical engineering are “better” than the soft stuff such as spirituality, cultural studies, and the arts. I’m not even sure that’s a useful issue to try to nail down, although I remember disposing of huge amounts of time in college trying to do so.
The issue is that a Camp A denizen, no matter how well intentioned, should not attempt to reach across the chasm and begin using terms from the other camp without also adopting Camp S’s reference frame. This would have precluded the mystic from saying what she said in the movie, but that’s my point and C.P. Snow’s point as well: The words don’t work in the absence of a suitable frame of reference. Borrowing words without the frame does not constitute communication— it constitutes provocation.
UNDAUNTED COURAGE

They say never to argue politics or religion with people you don’t know, and with the above discussion I have probably just walked perilously close to the religion part. To prove that I don’t know when to quit, I will now plunge into the politics arena with the same combination of derring-do, aplomb, and well-meaning dumb luck.

Many US citizens were surprised by the outcome of the 2004 presidential election—it seemed to them that most voters had inexplicably voted against their own best interests and that no facts to the contrary would dissuade them. In his book titled Don’t Think of an Elephant! Know Your Values and Frame the Debate (Chelsea Green, 2004), George Lakoff attempts to analyze what happened and to explain the inexplicable.
Lakoff says that most people intuitively believe that when well-meaning opponents argue an issue, the facts will emerge, and the right thing to do will become apparent from the way those facts stack up. But Lakoff says that belief is wrong. He points to results from research in the cognitive science community indicating that facts by themselves don’t mean anything at all. The words used in describing those facts do, and those words invoke frames.

One of Lakoff’s examples is, “What do you call it when an existing tax rate is lowered?” The term “tax cut” might come to mind, and the US government has in fact used that term at various times, especially in the Reagan era. But nowadays, lowering the rate is called “tax relief.” Whether lowering the tax rate is a good idea or not isn’t at issue here—the issue is that renaming the process invokes a new mental frame. Where a “tax cut” might or might not be a good thing—after all, most people do realize that the government uses tax monies for essential services such as defense—“tax relief” is like “pain relief.” It’s automatically favorable unless there are extenuating circumstances. Same reality, but the different words invoke different frames and cause the listeners to think in much different ways.

Moreover, argues Lakoff, if one side in a debate has successfully invoked a particular cognitive frame, the other side must be extremely careful in how they respond. If the response uses the opposing side’s frame, the words and facts that follow won’t matter: The listeners will discard any facts that aren’t congruent with the frame. Instead, the debaters must invoke their own frames and get the listeners to follow them in transitioning from one to the other.

If you voted for George W. Bush, you might not want to read Lakoff’s book. He goes far beyond intellectual discussion of cognitive frames into political agendas that aren’t appropriate to this column. Everyone else, read it—it’s short, and it packs a punch.
I found this book profoundly depressing because I’m not sure a democracy can ultimately survive when manipulation has so thoroughly taken the place of reasoned discourse. Yet if manipulation is the only effective tactic, how can we avoid using it?
WE MUST MAKE THE FIRST MOVE

Bill Wulf says one of the tasks of the National Academy of Engineering is to “speak truth to power.” I’m both reassured and troubled when I hear Wulf say that most public servants he has had contact with, and all of those in the highest levels, are well-meaning as well as extremely intelligent and capable. Reassured, because even when they do really stupid-looking things, at least they’re trying. And troubled for exactly that same reason.
I felt like throwing my car radio out the window a few weeks ago when I heard a local school board official expressing his views on whether creationism ought to be taught in schools along with evolution. He said, “Evolution is only a theory. It’s not a fact. Creationism is a theory too. So therefore it follows that we should present them together as just two theories.” If the people in charge of education have such a poor grasp of science, how can they possibly make intelligent decisions, no matter how hard they work and no matter how well-intentioned they are? When Wulf says that engineers “speak truth,” he means scientific truth—truth as best we know it. That’s all the truth we’ll ever have in this world, but when we stop questioning it, that’s when our truth has become dogma and we have betrayed the scientific process. That all scientific
truths are in some sense provisional is a strength of the process, not some kind of weakness. We must not allow ignorance of that fact to devalue it.

Wulf recounted several instances in which an Academy of Engineering study had produced results that were, er, “imperfectly aligned” with prevailing administration policies. Global warming, for example. Had Wulf’s analysis teams come up with a conclusion that global warming wasn’t real, the Academy would have been lauded as heroes—and in all likelihood the policymakers wouldn’t have questioned it. Instead, the Academy’s study concluded exactly the opposite and engendered the usual denial, anger, ineffective refutation, and disregard. But Wulf is still hopeful that eventually the administration will come around. Speaking truth to power isn’t always immediately appreciated by power.

One cool thing about science is that when you get it right, it stays right, even if it takes people some time to come around. The not-so-cool thing is that we don’t have an indefinite amount of time to puzzle this out.
Meanwhile, forewarned is forearmed. We in the technology business must realize we’re all in the service of truth, an absolute truth that must be guarded, developed, and continually explained to people who need our knowledge and intuition but who may not have the frame of reference needed to understand it. Someone must learn the other side’s frames and bridge that gap, and the Camp A folks can’t do it without the technical training that they don’t have and don’t realize is necessary. If the arts/science gap is to be bridged, it will be we technologists who bridge it. ■
Bob Colwell was Intel’s chief IA32 architect through the Pentium II, III, and 4 microprocessors. He is now an independent consultant. Contact him at [email protected].
32 & 16 YEARS AGO
JUNE 1973

MINICOMPUTERS (p. 11). “Not long ago, the digital computer was commonly regarded as a replacement for many human activities. Accordingly, many computer systems over the past 20 years have been designed and justified on the basis of how many people they would replace. Although the value of those computer systems has been substantial, we now realize that perhaps the most important and powerful computer applications will involve the use of a computer as an extension of the human intellect rather than a replacement for it.

“The concept of personal computing is not hard to visualize. It involves placing the power of a computer at the disposal of individuals on an interactive, easy-to-use, and economical basis. However, it is difficult to implement that concept. Minicomputers, time sharing and more recently, pocket calculators, have brought the power of the computer to millions, but we’re still a long way from the personal computer.

“The recent development of two new technologies has taken us closer to a practical personal computer. One of these, LSI microprocessors, will make the cost of computing power feasible for more people and more applications. The development of the flexible disk file and IBM’s recent announcement to adopt the flexible disk as a standard data medium may fill the need for a reliable, low-cost storage device.”

MICROPROCESSORS (p. 19). “LSI microprocessors can replace minicomputers in many present applications to give less expensive, more reliable, and more compact systems. Also, by microprogramming they can be tailor-made for specific, large volume applications, resulting in still greater savings and efficiency. As system designers learn how and when to use them, microprocessors will become common in these applications as well as in those that were not technically and economically practical before LSI.
In fact, they may revolutionize data processing in the next decade, just as minicomputers did in the last, bringing full computing power one more step closer to widespread use for everyday tasks.”

FLEXIBLE DISKS (p. 26). “IBM’s selection of the flexible disk as the data entry medium for the future will have enormous repercussions in the data processing industry. For large computer systems it means that keypunches, card handling equipment, key-to-tape and key-to-disk equipment may be replaced by equipment using Diskettes. For many minicomputer systems, it means that the cost of the peripherals, which today amount to the lion’s share of the system cost, can be substantially reduced. Most importantly, it establishes as an industry standard a medium that can be used as a multipurpose miniperipheral and which, when combined with LSI microprocessors, can do many general and special purpose data processing tasks at a fraction of their cost today.”
16-BIT MICROPROCESSOR (p. 41). “The semiconductor industry’s first single-card 16-bit microprocessor system, the IMP-16C, has been introduced by National Semiconductor Corp., Santa Clara, Calif.

“IMP-16C, a low-cost system on an 8-1/2” × 11” printed circuit card, consists of a 16-bit microprocessor; clock system; I/O bus drivers; 256 words of read/write memory (RAM); and provisions for 512 words of ROM/PROM memory.”

“IMP-16C uses a standard set of 42 instructions and operates on a microcycle of 1.5 microseconds. It can address up to 65,000 words of memory. The system uses a simple bus structure; memory address bus; and separate memory and I/O data bus.”

MEMORY ARRAY (p. 44). “Strands of magnetically coated tungsten wire about the diameter of a human hair are being fabricated by General Electric aerospace engineers into smaller and more versatile computer memories for guiding supersonic aircraft and distant space probes with greater control.”

“Plated wires are preferred as main memories in aerospace and industrial control applications because of their small size, higher speeds, low power requirements, reliability, long life, non-destructive readout, and non-volatility (stored information is not affected by power interruptions). By contrast, core and semiconductor memories are preferred for use in general purpose commercial computers because of cost considerations.”

SATELLITE TRACKING (p. 46). “What connoisseurs of computers would call a ‘vintage’ machine is doing up-to-date research in developing methods to identify orbiting artificial satellites.
Part of an experimental system devised by the Techniques Branch of the Air Force Systems Command’s Rome Air Development Center, … the computer, a Digital Equipment Corporation PDP-1, is used to control the antennas that track the satellites, helping the system to establish the elements of orbits by obtaining the coordinates … of the satellites.”

“The PDP-1, which has been ‘on duty’ for over four years, was acquired from another Air Force facility that was ‘retiring’ the machine after years of service.”

AIRPORT COMPUTER (pp. 46-47). “Electronic message switching equipment to supply complete passenger information to all airports in the Moscow area will be furnished by a French subsidiary of International Telephone and Telegraph Corporation. The value of the project is about $2.2 million.

“The equipment, to be supplied to Aeroflot by ITT’s Compagnie Générale de Constructions Téléphoniques, will retain all needed passenger information in a central computer and will instantaneously display it to the public. Such
data as arrival time, departure time, check-in times, luggage, and passenger boarding times, and information on connecting flights, with last-minute changes, will be available.”

MEDICAL COMPUTER (p. 47). “The 240-bed Anaheim Memorial Hospital in Anaheim, California has installed a new $140,000 computer system that is serving its unique Acute Care Center.

“Designed specifically for Anaheim Memorial Hospital by Beehive Medical Electronics, Inc., of Salt Lake City, the Beehive MCS 100 computer system is located on the fifth floor of the complex’s new wing and serves the four vital areas of the Acute Care Center—coronary, progressive care, medical-surgical and pulmonary.

“The first system of its kind that utilizes an arrythmia [sic] technique to monitor a patient’s heart patterns and the repetivity of his pulse, the Beehive MCS 100 can monitor the heart performance of up to 32 patients simultaneously. The system can even be used to predict coronary difficulties.”
JUNE 1989

SMART MACHINES (p. 4). “At the Computer Museum in Boston, Massachusetts, more than half an acre of hands-on and historical exhibits demonstrate the extraordinary changes in the size, capability, application, and cost of computers in the last four decades. Museum galleries span computer technology from vacuum tubes, to transistors, to ICs, and, since the opening of the Smart Machines Gallery in June 1987, to robotics and artificial intelligence.

“Within the Smart Machines Gallery are 25 interactive exhibits and the Smart Machines Theater, where more than 25 research robots ‘come to life’ in a 10-minute multimedia presentation.”

AUTONOMOUS MACHINES (p. 14). “The first level [of autonomy] is teleoperation—the extension of a person’s sensing and manipulating capabilities to a remote location … .”

“The next level, telesensing, involves sensing additional information about the teleoperator and task environment and then communicating this information [so that] the human operator … feels physically present at the remote site.

“The third level is that of telerobotics. … The subordinate telerobot executes the task based on information received from the human operator and augmented by its own artificial sensing and intelligence.”

AN AUTONOMOUS ROBOT (p. 30). “An autonomous mobile robot, HERMIES-IIB (hostile-environment robotic machine intelligence experiment series) is a self-powered, wheel-driven platform containing an on-board 16-node Ncube hypercube parallel processor interfaced to effectors and sensors through a VME-based system containing a Motorola 68020 processor, a phased sonar array, dual manipulator arms, and multiple cameras.”
A LEGGED VEHICLE (p. 59). “The Adaptive Suspension Vehicle is an experimental, six-legged walking vehicle developed jointly by the Ohio State University and Adaptive Machine Technologies. Designed to explore the capabilities of legged vehicles, the ASV can walk in any direction and turn about any axis. It can walk through mud, step over ditches, and negotiate vertical ledges that would stop a conventional vehicle. Unlike conventional vehicles, which require virtually continuous supporting surfaces, the ASV can traverse terrain where footholds are sparse and avoid rocks, holes, and other obstacles in its path. The vehicle’s suspension can therefore adapt to the terrain, justifying its name.”

NEURAL LEARNING (p. 74). “To summarize, we have presented a new theoretical framework for adaptive learning using neural networks. Continuous nonlinear mappings and topological transformations, such as inverse kinematics of redundant robot manipulators, are its main application targets. Central to our approach is the concept of terminal attractors—a new class of mathematical constructs that provide unique information processing capabilities to the neural system. The rapid network convergence resulting from the infinite local stability of these attractors enables the development of fast neural learning algorithms, an essential requirement for manipulator control in unstructured environments.”

PARALLEL PROCESSING LANGUAGE (p. 110). “Strand Software Technologies … now markets its general-purpose programming language for parallel and distributed processing in the U.S. Strand88 code is implicitly parallel, … and software can be embedded in a Strand88 program to extract latent parallelism. This reportedly provides a migration path for large sequential applications to move to a parallel processing environment.

“The company claims that Strand88 is the first commercial implementation of the Strand language, developed in the U.K. It includes a development environment for the programmer.
Strand88 is currently available on Sun-3 and Sun-4 workstations, all configurations of Intel iPSC/2, and System V Unix/80386 systems.”

WEBS OF INFORMATION (p. 112). “Brown University’s Institute for Research in Information and Scholarship (IRIS) has announced the availability of IRIS Intermedia. The multiuser hypermedia development system facilitates the creation of webs of information consisting of text, graphics, time lines, and scanned images.

“Intermedia allows multiple overlapping windows, lengthy documents, navigation aids, and a multiuser environment.”
Editor: Neville Holmes; [email protected].
INDUSTRY TRENDS
What Lies Ahead for Cellular Technology? George Lawton
The ongoing goal of cellular services providers has been to make networks faster to enable new revenue-producing Internet-access and multimedia- and data-based broadband services, in addition to telephony. For example, carriers want to offer mobile Internet services as fast as those provided by cable- and DSL-based wireline broadband technologies. This process has taken the industry through various generations of radio-based wireless service: After first-generation (1G) analog cellular service, they have offered 2G, 2.5G, and, since 2001, 3G digital technology.

However, 3G has disappointed many in the industry because of its high implementation costs and slow adoption and because its initial deployments didn’t support services that carriers wanted to offer, said analyst Andy Fuertes with Visant Strategies, a market research firm.

As carriers upgrade their 3G offerings, they are looking perhaps five years ahead to 4G services, which would be based on the Internet Protocol and support mobile transmission rates of 100 Mbps and fixed rates of 1 Gbps. Presently the subject of extensive research, 4G would enable such currently unavailable services as mobile high-definition TV and gaming, as well as teleconferencing.
Wireless companies are thus preparing the transition to 4G from 3G, which could include 3.5G technologies, as the “Cellular Technology Hits the Road Again” sidebar explains. Some carriers are looking at new technologies such as IEEE 802.20. Many providers, though, are simply upgrading the wireless technology they’re already using, to avoid having to change their networking infrastructure. However, this would continue the current problematic situation in which providers throughout the world work with incompatible cellular technologies. “It means more fragmentation, so roaming could be more difficult,” Fuertes said. “Component makers would also lose some economies of scale.”
FOUR CELLULAR PATHS

There are four categories of next-generation wireless technologies, typically implemented via chipsets, radio transceivers, and antennas.
GSM

The 2G Global System for Mobile Communications (GSM) technology, implemented in much of Europe and Asia, is based on time-division multiplexing. In GSM, TDM increases bandwidth by dividing each cellular channel into eight time slots, each of which handles a separate transmission. The channel switches quickly from slot to slot to handle multiple communications simultaneously.

GSM-based wireless services include 2.5G general packet radio service; 2.5G enhanced data GSM environment (EDGE); 3G wideband CDMA (WCDMA), used in the Universal Mobile Telecommunications System (UMTS); and 3.5G High-Speed Downlink Packet Access. All are currently in use except HSDPA, which is in trials. “This year,” said analyst Will Strauss with Forward Concepts, a market research firm, “we are forecasting that traditional GSM sales will be down by 20 percent in units and that UMTS- and EDGE-based technologies will be sharply up.”

WCDMA, currently deployed in Europe and Japan, uses a 5-MHz-wide channel, which is big enough to enable data rates up to 2 Mbps downstream. The technology also increases GSM’s data rates by using higher-capacity CDMA, instead of GSM’s usual TDMA, modulation techniques. However, WCDMA uses different protocols than CDMA and is thus incompatible with it.

HSDPA uses a higher modulation rate, advanced coding, and other techniques to improve performance. Otherwise, HSDPA is similar to and can be implemented as just a software upgrade to a WCDMA base station, said Vodafone spokesperson Janine Young. Both technologies operate in the 2.1-GHz frequency range. HSDPA offers theoretical and actual download rates of 14.4 Mbps and 1 Mbps, respectively. But, Visant Strategies’ Fuertes noted, HSDPA addresses only downstream transmissions. HSDPA networks use existing
UMTS approaches for the network’s uplink. Thus, HSDPA supports applications that primarily require one-way high-speed communications such as Internet access but doesn’t support two-way high-speed communications such as videoconferencing. High-Speed Uplink Packet Access technology will provide faster uplink speeds when finalized.

Carriers such as the UK’s Orange, O2, and Vodafone; Japan’s NTT DoCoMo; and the US’s Cingular Wireless have already started trials of the technology on their networks. Fuertes predicts deployment will begin next year. Telecommunications vendors such as LG Electronics and Nortel Networks are running trials of HSDPA equipment, such as base stations and handsets.
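The gap between HSDPA’s quoted 14.4-Mbps theoretical rate and its roughly 1-Mbps real-world rate is easiest to feel as download times. The sketch below is an illustrative back-of-envelope using only those two figures from the discussion above; the 5-megabyte file size and the decimal (10^6-byte) megabyte convention are assumptions for the example.

```python
# Rough download-time comparison for the HSDPA rates quoted in the
# article: 14.4 Mbps theoretical vs. about 1 Mbps actual.
def seconds_to_download(megabytes, rate_mbps):
    """Time in seconds to move `megabytes` at `rate_mbps`."""
    bits = megabytes * 8 * 1e6       # 1 MB taken as 10^6 bytes (assumed)
    return bits / (rate_mbps * 1e6)  # Mbps taken as 10^6 bits/s

size_mb = 5  # an arbitrary 5-MB example file
theoretical = seconds_to_download(size_mb, 14.4)
actual = seconds_to_download(size_mb, 1.0)
print(f"theoretical: {theoretical:.1f} s, actual: {actual:.1f} s")
```

Under these assumptions the same file takes about 3 seconds at the theoretical rate but 40 seconds at the observed rate, which is why analysts treat the headline figure with caution.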
CDMA

The 2G code-division multiple-access technology, developed by Qualcomm and used primarily in the US, doesn’t divide a channel into subchannels, like GSM. Instead, CDMA carries multiple transmissions simultaneously by filling the entire communications channel with data packets coded for various receiving devices. The packets go only to the devices for which they’re coded.

CDMA-based wireless services include the 2G CDMA One and the 3G CDMA2000 family of technologies. All CDMA-based approaches operate at the 800-MHz or 1.9-GHz frequencies.

CDMA2000 1x, sometimes called 1xRTT (radio transmission technology), is at the core of CDMA2000. It runs over 1.25-MHz-wide channels and supports data rates up to 307 Kbps. While officially a 3G technology, many industry observers consider 1xRTT to be 2.5G because it’s substantially slower than other 3G technologies. CDMA2000 1xEV (Evolution) is a higher-speed version of CDMA2000 1x. The technology consists of
Cellular Technology Hits the Road Again Early mobile phones used first-generation (1G) cellular technology, which was analog, circuit-based, narrowband, and suitable only for voice communications. The dominant wireless-networking technology during the past few years has been 2G, which is digital and narrowband but still suitable for voice and limited data communications. In response to disappointment with initial 3G approaches, network equipment vendors such as LG Electronics and Nortel Networks are developing faster 3G technologies. Carriers hope faster data rates will yield revenue from new services such as videoconferencing and mobile video and audio streaming. Wireless carriers also hope to offer lucrative data services comparable to those provided by wireline networks but based on the lower-cost, wire-free infrastructure. The resulting lower prices could encourage more consumers to use wireless services for all communications.
CDMA2000 1xEV-DO (data only) and 1xEV-DV (data/voice). EV-DO separates data from voice traffic on a network and handles the former. Initial versions support theoretical maximum rates of 2.4 Mbps downstream and 153 Kbps upstream. Recent revisions to EV-DO and EV-DV will support 3.1 Mbps downstream and 1.8 Mbps upstream theoretical maximum rates. Real-world rates are about half as fast, according to Forward Concepts’ Strauss. Vendors, led by Sprint and Verizon and joined by carriers such as Japan’s KDDI and Brazil’s Vivo, have supported EV-DO. This is driving down equipment costs, which should encourage further adoption. Carriers haven’t yet started developing EV-DV. The latest EV-DO revision reduces the maximum transmission latency from 300 to 50 milliseconds, making it more suitable for Internet telephony, which requires near-real-time responses. Strauss predicts the new revision won’t be ready for implementation until 2008. The International Telecommunication Union (ITU) recently approved the 3.5G CDMA2000 3x, also called CDMA Multicarrier. CDMA2000 3x would use a pair of 3.75-MHz-wide channels and is expected to provide high data capacity with transmission
rates of 2 to 4 Mbps. No companies are deploying CDMA2000 3x yet.
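The coding idea at CDMA's core — every receiver gets its own code, all transmissions share the channel at once, and each receiver recovers only the bits coded for it — can be demonstrated with tiny orthogonal (Walsh) codes. The two-chip codes below are toy values; real CDMA systems use far longer codes:

```python
# Illustrative CDMA sketch: two users share one channel simultaneously.
# Each user's bits are "spread" by an orthogonal code; correlating the
# received sum with one user's code cancels the other user's signal.
# Codes and user names are invented for illustration.

CODES = {"alice": [1, 1], "bob": [1, -1]}  # orthogonal Walsh codes


def spread(bit, code):
    # Map bit {0,1} to a symbol {-1,+1}, then multiply by each code chip.
    symbol = 1 if bit else -1
    return [symbol * chip for chip in code]


def channel(*signals):
    # All users transmit at once; the shared channel simply sums the chips.
    return [sum(chips) for chips in zip(*signals)]


def despread(received, code):
    # Correlate with one user's code; orthogonality removes the other users.
    correlation = sum(r * c for r, c in zip(received, code))
    return 1 if correlation > 0 else 0


mixed = channel(spread(1, CODES["alice"]), spread(0, CODES["bob"]))
```

Despreading `mixed` with Alice's code recovers her 1, and with Bob's code recovers his 0 — the packets "go only to the devices for which they're coded."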
WiMax and WiBro

WiMax (worldwide interoperability for microwave access) technology, based on the IEEE 802.16 standard, promises global networks that could deliver 4G performance before the end of this decade. IEEE 802.16d, approved last year, has received support from numerous companies including Alcatel, Intel, and Samsung. However, the standard supports only fixed, point-to-multipoint, metropolitan-area-network technology that works via base stations and transceivers up to 31 miles away.

WiMax is fast in part because it uses orthogonal frequency-division multiplexing. OFDM increases capacity by splitting a data-bearing radio signal into multiple sets, modulating each onto a different subcarrier—spaced orthogonally so that they can be packed closely together without interference—and transmitting them simultaneously.

The IEEE 802.16e Task Group and companies such as Intel expect to finish work next year on 802.16e, which adds mobility to WiMax by using a narrower channel width, slower speeds, and smaller antennas. The Task Group is still working on various aspects of the standard. Also, said Visant Strategies' Fuertes, "WiMax has yet to implement features, such as mobile authentication and handoffs, that are pivotal to a mobile network. This will take time and testing."

IEEE 802.16e will operate in the 2 to 6 GHz licensed bands and is expected to offer data rates up to 30 Mbps.

In South Korea, LG Electronics and Samsung have developed a technology called WiBro (wireless broadband), designed for use in the 2.3-GHz frequency band. Proponents such as Samsung advocated basing IEEE 802.16e on WiBro. Intel and some other WiMax supporters opposed this because WiBro uses a different frequency band and carrier structure than IEEE 802.16. An ongoing disagreement could have created two rival technologies, thereby splitting the market and inhibiting adoption. However, companies on both sides have agreed to merge the two technologies in IEEE 802.16e. If the IEEE finishes work on 802.16e this year, said Intel spokesperson Amy Martin, "in the 2007 to 2008 time frame, we will see it in mobile phones and PDAs."

[Figure 1 (chart): projected handset revenue, in millions of US dollars, from 2001 through 2008, broken down by technology: PDC/PHS, TDMA, CDMA, GSM, GPRS, EDGE, WCDMA, CDMA2000 1xRTT, CDMA 1xEV-DO, HSDPA, other 3.5G, IEEE 802.20, and other. Abbreviations: CDMA: code-division multiple access; EDGE: enhanced data GSM environment; EV-DO: evolution, data only; GPRS: general packet radio service; GSM: Global System for Mobile Communications; HSDPA: high-speed downlink packet access; PDC/PHS: personal digital cellular/personal handyphone system; RTT: radio transmission technology; TDMA: time-division multiple access; WCDMA: wideband CDMA. Source: Forward Concepts.]

Figure 1. Future revenue generated by the sale of handsets that use the various cellular approaches will reflect the growing popularity of newer technologies, predicts Forward Concepts, a market research firm.
IEEE 802.20

An IEEE effort led by Flarion Technologies and supported by vendors such as Lucent Technologies and Qualcomm is developing a cellular standard—based on Flarion's Flash-OFDM—that could handle voice, multimedia, and data. IEEE 802.20 will be a packet-switched technology operating between 400 MHz and 3.6 GHz that could offer optimal data rates of 6 Mbps downstream and 1.5 Mbps upstream.

Flash-OFDM, implemented primarily in Europe, works with OFDM and fast-frequency-hopping spread-spectrum technology, which repeatedly switches frequencies during a radio transmission. This sends a signal across a much wider frequency band than necessary, spreading it across more channels on a wider spectrum and increasing signal capacity. In real-world systems, the average user would experience about 1 Mbps of downstream and 500 Kbps of upstream bandwidth, noted Ronny Haraldsvik, Flarion's vice president of global communications and marketing.

According to Fuertes, major vendors are presently focusing on other cellular technologies, which has put IEEE 802.20 on the back burner.
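The OFDM mechanism that WiMax and Flash-OFDM rely on — placing data symbols on orthogonal subcarriers and transmitting them simultaneously — amounts to an inverse discrete Fourier transform at the sender and a forward transform at the receiver. A four-subcarrier toy sketch (real systems add cyclic prefixes, coding, and hundreds of carriers):

```python
# Minimal OFDM sketch: symbols go onto orthogonal subcarriers via an
# inverse DFT, travel as one time-domain signal, and are recovered with a
# forward DFT. Four real-valued symbols keep the example small.
import cmath


def idft(symbols):
    """Inverse DFT: one data symbol per subcarrier -> time-domain samples."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]


def dft(signal):
    """Forward DFT: time-domain samples -> per-subcarrier symbols."""
    n = len(signal)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(signal))
            for k in range(n)]


tx_symbols = [1, -1, 1, 1]                        # one symbol per subcarrier
time_signal = idft(tx_symbols)                    # transmitted waveform
rx_symbols = [round(c.real) for c in dft(time_signal)]
```

Because the subcarriers are orthogonal, the receiver separates them perfectly even though they overlap in frequency — the property that lets OFDM pack subcarriers "closely together without interference."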
OTHER EFFORTS

NTT DoCoMo and Vodafone, the carriers that have invested most heavily in GSM-based technologies, have joined forces with companies such as Cingular and China Mobile to study a future WCDMA-based approach they call Super 3G. The Super 3G group says it will have specifications by mid-2007 and working systems by 2009.
The Radio Access Network Working Group of the 3G Partnership Project 2—a consortium of 200 wireless vendors and operators, including some in the Super 3G group—has begun studying a possible standard that supports wireless downlink speeds of 100 Mbps. The 3GPP2—which develops 3G mobile systems based on the ITU’s International Mobile Telecommunications-2000 (IMT-2000) project— expects to complete its study next year.
Industry observers predict that revenue generated by sales of handsets using various cellular technologies will reflect the increasing use of the newer approaches, as Figure 1 shows.
Nonetheless, the evolution toward 4G cellular technology is uncertain. According to Visant Strategies’ Fuertes, 4G systems may well be based on an aggregation of network technologies, in part reflecting today’s multiple cellular approaches. Market-based, more than technology-based, considerations will drive broadband cellular’s future, contended analyst Keith Nissen with publisher Reed Business. “In the long term,” he said, “I see wireless broadband as an alternative to cable or DSL services.” Fuertes said 4G will be attractive for many services because carriers will want to move to an all-IP system that supports the many existing IP-based applications, such as videoconferencing, gaming, and wireless Internet telephony.
Meanwhile, he noted, carriers in countries such as China, India, Japan, and Korea are talking about developing their own new 4G standards, to give them control over the technologies used in their potentially lucrative mobile-communications industries. In any event, proponents don't anticipate commercial deployment of 4G before 2010. ■

George Lawton is a freelance technology writer based in Brisbane, California. Contact him at [email protected].
Editor: Lee Garber, Computer,
[email protected]
IEEE International Symposium on Multimedia (ISM2005)
December 12-14, 2005, Irvine, California, USA
http://ISM2005.eecs.uci.edu/
Sponsored by the IEEE Computer Society in cooperation with the University of California at Irvine

The IEEE International Symposium on Multimedia (ISM2005) is an international forum for researchers to exchange information regarding advances in the state of the art and practice of multimedia computing, as well as to identify the emerging research topics and define the future of multimedia computing. The technical program of ISM2005 will consist of invited talks, paper presentations, and panel discussions. Submissions of high-quality papers describing mature results or ongoing work are invited. Topics for submission include but are not limited to:

• Multimedia systems, architecture, and applications
• Virtual reality
• Multimedia networking and QoS
• Multimedia and multimodal user interfaces and interaction models
• Peer-to-peer multimedia systems and streaming
• Multimedia file systems, databases, and retrieval
• Pervasive and interactive multimedia systems
• Multimedia collaboration
• Multimedia meta-modeling techniques and operating systems
• Rich-media-enabled e-commerce
• Architecture specification languages
• Computational intelligence
• Software development using multimedia techniques
• Intelligent agents for multimedia content creation, distribution
• Multimedia signal processing and analysis
• Multimedia tools
• Internet telephony and hypermedia technologies and systems
• Visualization
• Multimedia security, including digital watermarking and encryption

Submissions. The written and spoken language of ISM2005 is English. Authors should submit a full paper via electronic submission to [email protected]. Full papers must not exceed 20 pages printed using at least 11-point type and double spacing. All papers should be in Adobe portable document format (PDF) or PostScript format. The paper should have a cover page, which includes a 200-word abstract, a list of keywords, and the author's phone number and e-mail address. The conference proceedings will be published by the IEEE Computer Society Press. A number of the papers presented at the conference will be selected for possible publication in journals.

ISM2005 will also include a few tutorials, workshops, and special tracks dedicated to focused interest areas. Submissions of proposals on tutorials, workshops, and special tracks of emerging areas are invited. Please submit proposals to the Tutorial Chairs, Workshop Chairs, or Special Tracks Chair, respectively. Papers from workshops and focused tracks will be presented at ISM2005 and included in the proceedings.

Important Dates
• July 15, 2005: Submission of papers and proposals on workshops, special tracks, and tutorials
• July 30, 2005: Notification of acceptance of workshop, special track, and tutorial proposals
• August 15, 2005: Notification of acceptance of papers
• September 10, 2005: Camera-ready copy of accepted papers due

Conference Location. The conference will be held at the Hyatt Regency Irvine, 17900 Jamboree Boulevard, Irvine, CA 92614; Tel: 949-225-6662; Fax: 949-225-6769; Website: www.irvine.hyatt.com
TECHNOLOGY NEWS
Security Technologies Go Phishing David Geer
Necessity is the mother of invention. In the computer industry, that usually means that when a problem arises, somebody figures out how to solve it and make money in the process. One of the latest computer-related problems to arise is phishing, in which e-mails lure unsuspecting victims into giving up user names, passwords, Social Security numbers, and account information after linking to counterfeit bank, credit card, and e-commerce Web sites.

Organized crime frequently uses phishing, noted Ken Dunham, director of malicious-code intelligence for iDefense, an online-security company. In addition, he said, there is a black market for stolen credit card and Social Security numbers.

"All fraudsters operate according to three elements: How hard is it to perpetrate? What is the risk of getting caught? What is the reward? It is really easy to do phishing, there is a very small chance of getting caught, and the reward is very high," explained Naftali Bennett, CEO of Cyota, which provides online security for financial institutions.

For the 12 months ending April 2004, said analyst Avivah Litan with Gartner Inc., a market research firm, "there were 1.8 million phishing attack victims, and the fraud incurred by phishing victims totaled $1.2 billion." Additional institutional costs, such as installing antiphishing technology and educating users, totaled about $100 million, according to George Tubin, senior analyst with the Delivery Channels Research Service at the TowerGroup, a financial-services research and consulting firm. Tubin predicted that phishing-related fraud could double this year.

The Anti-Phishing Working Group (www.antiphishing.org)—a global consortium of businesses, technology firms, and law enforcement organizations—reports that the number of phishing-related e-mails is growing rapidly, increasing 28 percent from July 2004 through March 2005. So far, phishers have hit only a small percentage of financial institutions—150 of 9,000 in the US—but the number is rising and attackers are targeting other industries, such as e-commerce, said APWG secretary general Peter Cassidy. Nonetheless, major phishing incidents have already eroded consumer confidence in online banking, noted Dunham.

Banks and many other affected organizations are fighting phishing via public education. Some companies are using litigation. For example, Microsoft recently filed 117 lawsuits against people who allegedly used the company's trademarked images to create phishing sites. However, the focus for numerous companies is on antiphishing technology.

ANTIPHISHING GROWING PAINS
Like phishing itself, antiphishing technology is new, with many companies starting to offer services only a year ago. Most of the companies focus on only one or two antiphishing techniques. Fighting phishing frequently requires more complex approaches, though. With this in mind, Corillian, Internet Identity, NameProtect, PassMark Security, and Symantec recently formed the Anti-Fraud Alliance to offer a comprehensive set of strategies.

The antiphishing market hasn't been around long enough yet to coalesce around a few approaches. "Thus, many companies have delayed adopting antiphishing measures until the market matures or they decide that further investments in traditional fraud control measures are more cost-effective," said Chuck Wade, a principal with the Interisle Consulting Group, a network-technology consultancy.

Regulators want to encourage potential victims to use antiphishing systems. For example, the US Federal Deposit Insurance Corp. (FDIC), which insures bank deposits and investigates compliance with banking regulations, has recommended that financial institutions use such technologies. In addition, the Financial Services Technology Consortium (www.fstc.org)—a group of North American financial institutions, technology vendors, independent research organizations, and government agencies—recently completed a six-month survey
of antiphishing approaches. The group has recommended technical and operating requirements for financial institutions’ antiphishing measures.
FIGHTING PHISHING

Several antiphishing approaches are becoming popular. One key to many of these approaches is having Internet service providers (ISPs) close phishing Web sites. However, this can be time-consuming and expensive. And it can be useless to even try closing sites in countries that lack or don't enforce antihacking laws. Meanwhile, companies whose Web sites are targeted by phishers must battle public perception that they can't protect their customers.

Retaliatory services. Several antiphishing companies offer retaliatory services. "Some security companies respond by sending phishing sites so much fake financial information that the sites can't accept information from would-be victims," explained Mark
Goines, chief marketing officer of vendor PassMark Security. Most phishing sites run off of Web servers installed on hijacked home computers and can't handle much traffic. However, retaliatory services generally don't shut down phishing sites by overwhelming them with traffic, as occurs in a denial-of-service attack. They just send the sites as much traffic as they can handle and dilute their databases with largely false information, a process known as poisoning.

Two-factor authentication. Banks can blunt phishing attacks by requiring online customers to use two-factor authentication, said Goines. One factor is a user ID and password, and the other is an authenticating image and phrase that the participant preselects with a bank or online vendor. As Figure 1 shows, an e-mail that a bank or online business sends to a consumer contains something the phisher couldn't have, such as the unique image and phrase that the user chose when setting up the account, explained Goines. This proves to the user that the e-mail
[Figure 1. PassMark Security offers a two-factor authentication system to fight phishing. In the figure, a phisher's e-mail looks like it comes from your bank; clicking its link leads to a counterfeit site that captures your login information and silently redirects you to the bank's official site, so you have no idea your security has been compromised—and the phisher gains complete access to your account. With PassMark, your bank's e-mail and login screen display the image and phrase you chose (for example, "I left my heart"), so you know they're authentic; only you and your banker know your PassMark. The system also "fingerprints" the PCs you use, making them a second factor of protection from fraud without additional hardware or software. Source: PassMark Security.]
came from the bank or business, not a phisher, and that it is safe to use the provided link.

Businesses can implement the system inside either their own Web site's or PassMark's data center. If implemented in PassMark's data center, the application provides additional protection by asking participants, before they link to a Web site from an e-mail note, for their user IDs for the site and by identifying their computer's IP address. The system verifies that both are correct. The application then presents customers with their preselected image and phrase along with a request for their logon password. After receiving the correct password, the system lets the customer access the Web site.

The FDIC recommends that US banks adopt two-factor authentication. However, said the TowerGroup's Tubin, few banks have required such measures out of concern that the additional effort could cause customers to avoid Internet banking.

Catching phishers during their preparation. Phishers spend considerable time visiting targeted sites, or they use automated tools to download copies of Web pages. This lets them more accurately counterfeit a business's site. Corillian, which also develops Web sites for financial institutions, monitors and evaluates suspicious traffic on customers' sites on weekends, when most phishers conduct reconnaissance. Corillian recognizes the signs of an impending phishing attack by understanding hackers' tools and techniques, explained Jim Maloney, Corillian's chief security executive.
"The Corillian Fraud Detection System (CFDS) analyzes Web logs to look for patterns of possible phishing-related behavior," he explained. "We have a set of rules for analyzing logs and producing suspicious-activity reports."

For example, to look more authentic, many phishing sites link to images on a targeted business's site, presenting them along with bogus information the hackers provide on their counterfeit site. The CFDS can check entries in a bank or other business's Web logs and identify phishers who are downloading and saving images, which is unusual behavior for a legitimate transaction. The CFDS then identifies counterfeit sites using customers' images without authorization. At that point, Maloney said, "Corillian can notify the account holder and law enforcement."

Phishers frequently verify stolen information before selling it. When CFDS identifies a single Internet address trying to access many of a business's online accounts, which is what would occur during the verification process, it notifies the company that the accounts may have been compromised.

Registering deceptive domain names. NameProtect watches for phishers by monitoring spam from many sources, including e-mail accounts set up to bait spammers and spam feeds from ISPs. The product also checks domain-name registration records and searches the Internet for counterfeit bank sites under construction, as located by its Web crawlers. If the company's ActiveIP tool finds a Web site that includes all or part of a customer's trademarked name, an analyst reviews it to determine if illegal activity is taking place, explained James Blystone, NameProtect's senior director of marketing.

"We work with the client, the ISP, and the registrar to immediately close phishing sites. We can take sites down as quickly as an hour or two but typically within 24 hours," noted Blystone. NameProtect shares phishing-related information with the US Secret Service and FBI, said Robert Caldwell, the company's director of business development.

Monitoring chat rooms and domain-name registries. MarkMonitor's Brand Protection and Fraud Protection services track online chat rooms and Internet-address registries for information on possible phishing sites. Every day, the company tracks name registrations and name changes, said MarkMonitor CEO Mark Shull. The company also has an Identity Tracker service that searches millions of Internet address records in its whois and reverse whois directories to identify the creators or owners of counterfeit sites, Shull noted.
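Web-log rules of the kind the CFDS discussion above describes — flagging a client that downloads an unusual number of image files, or a single address that tries to log in to many distinct accounts — are straightforward to sketch. The thresholds, log format, and function names here are invented for illustration; the actual CFDS rules are proprietary:

```python
# Hypothetical sketch of phishing-reconnaissance detection rules over Web
# logs: flag IPs bulk-downloading images (site-copying behavior) and IPs
# touching many accounts (stolen-credential verification). All thresholds
# and the (ip, path, account) log format are invented.
from collections import defaultdict

IMAGE_DOWNLOAD_LIMIT = 20   # illustrative threshold
ACCOUNT_ACCESS_LIMIT = 5    # illustrative threshold


def suspicious_ips(log_entries):
    """log_entries: iterable of (ip, path, account); account may be None."""
    image_counts = defaultdict(int)
    accounts_touched = defaultdict(set)
    for ip, path, account in log_entries:
        if path.endswith((".gif", ".jpg", ".png")):
            image_counts[ip] += 1
        if account is not None:
            accounts_touched[ip].add(account)
    flagged = set()
    flagged.update(ip for ip, n in image_counts.items()
                   if n > IMAGE_DOWNLOAD_LIMIT)
    flagged.update(ip for ip, accts in accounts_touched.items()
                   if len(accts) > ACCOUNT_ACCESS_LIMIT)
    return flagged
```

A real system would add rate windows, referrer checks, and whitelisting of crawlers, but the shape — aggregate per client, compare against behavioral thresholds — is the same.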
MarkMonitor uses its findings to try to convince domain-name registrars to transfer ownership of domains that include a company's name and that are used for phishing to the victimized businesses.

Antiphishing browsers. In the near future, Netscape plans to release a Web browser designed to resist phishing. Netscape is negotiating with security companies to supply the Netscape 8 beta with frequently updated blacklists of suspected phishing Web sites. "The browser also uses a whitelist of [trusted] sites," explained Netscape spokesperson Andrew Weinstein. When a user follows an e-mail link and visits a trusted site, the browser automatically renders it. "The browser will block user access to known phishing sites," Weinstein noted. When a user visits a site not on a whitelist or blacklist, the browser renders it with enhanced security that disables ActiveX and JavaScript capabilities, which phishers could use to exploit vulnerabilities.
Deepnet Explorer 1.4, a browser shell that uses Internet Explorer to render Web pages, analyzes Web addresses and warns users about those on a blacklist of suspect sites. Users can then choose to either stop or continue trying to access a site. "Our blacklist comes from sites collected internally, reported by our users, and aggregated from our affiliate companies," noted Deepnet marketing manager Anneli Ritari.

Tracking. Cyota monitors various accounts set up specifically to be tracked if phishers attack them. This enables companies to observe the phishing and fraud process and learn ways to fight back. Cyota tracks account activities outside a bank's network, such as on the Internet, while banks track them within their own systems.
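The three-way policy described for the Netscape 8 beta — render whitelisted sites normally, block blacklisted sites, and fall back to a restricted scripting-disabled mode for everything else — reduces to a simple classification. A minimal sketch, with invented list contents and hostnames:

```python
# Sketch of a whitelist/blacklist browser policy like the one attributed
# to the Netscape 8 beta. The list entries and example hostnames are
# invented; real browsers ship frequently updated lists from vendors.
from urllib.parse import urlparse

WHITELIST = {"www.mybank.example"}           # trusted: render normally
BLACKLIST = {"www.mybank-verify.example"}    # known phishing: block


def classify(url):
    host = urlparse(url).hostname
    if host in BLACKLIST:
        return "block"                # deny access outright
    if host in WHITELIST:
        return "render"               # full functionality
    return "render-restricted"        # unknown: disable ActiveX/JavaScript
```

The unknown-site fallback matters most: it keeps never-before-seen phishing pages from running active content even before any list catches up to them.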
FUTURE THREATS

Phishers are improving techniques for making counterfeit Web sites look more realistic and for convincing visitors to enter personal information on them. In addition, phishing attacks are becoming more sophisticated. For example, phishers are increasingly using instant messaging instead of e-mail, according to the APWG's Cassidy. In one type of attack, phishers send IM users a message with a link to a fake Web site.
Cross-site scripting

"Online criminals are increasingly using cross-site scripting (XSS) flaws to inject their own code into legitimate Web pages and fool unsuspecting consumers into falling for phishing scams," said Paul Mutton, a developer with Netcraft, an Internet services company.

"Cross-site scripting vulnerabilities in Web server applications cause some pages to process JavaScript code incorrectly. This lets hackers push their own JavaScript programs, such as fake password login systems, onto legitimate Web pages," Mutton explained.

"The majority of phishing Web sites are only semi-believable, and users are
starting to see through them,” he said. “But with cross-site scripting, people are more likely to fall for the scam.” XSS can also take advantage of the session cookies frequently used when customers log onto banking or e-commerce sites. A script injected onto a site could access a cookie and send it to the phisher, who could then replicate it and log onto a victim’s account. “Companies will see more XSS threats unless they review server applications and eliminate the flaw that enables the attacks,” Mutton predicted.
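The server-side flaw Mutton describes arises when an application echoes attacker-supplied input into a page without neutralizing it. The standard fix is output escaping; a minimal defensive sketch (the page template and function name are invented for illustration):

```python
# Minimal XSS-defense sketch: HTML-escape user-supplied text before it is
# written into a page, so injected <script> markup renders as inert text
# instead of executing. The greeting template is an invented example.
import html


def render_greeting(user_supplied_name):
    safe = html.escape(user_supplied_name)
    return f"<p>Welcome back, {safe}!</p>"


page = render_greeting("<script>stealCookie()</script>")
```

Escaping at output time is exactly the kind of server-application review Mutton says companies need: it eliminates the flaw the attack depends on rather than chasing individual counterfeit pages.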
Malicious code

The recent Crowt.D worm demonstrated how a virus author could write code that alters an operating system's hosts file, which contains mappings of IP addresses to host names, and redirect visitors to another site, noted Jaime Lyndon Yaneza, senior antivirus consultant with the Global Anti-Virus Research Group at Internet-security vendor Trend Micro. Attackers could use this technique to redirect users to a phishing site, according to Yaneza.
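A defensive sketch inspired by the Crowt.D technique: scan a hosts file for entries that hijack well-known domains, flagging any mapping of a watched domain to an address outside its known-good list. The domain names, addresses, and function name below are invented for illustration:

```python
# Hypothetical hosts-file integrity check: flag entries that map a watched
# domain to an unexpected IP address, as a hosts-file-altering worm would.
# Domains and addresses here are invented examples.
KNOWN_GOOD = {"www.mybank.example": {"203.0.113.10"}}


def hijacked_entries(hosts_text):
    suspects = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            if name in KNOWN_GOOD and ip not in KNOWN_GOOD[name]:
                suspects.append((name, ip))
    return suspects
```

Antivirus tools apply the same idea more generally, treating any unexplained hosts-file change as a redirection attempt worth quarantining.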
Attacking companies, not customers

Rather than targeting a victim's personal information, some phishing schemes attack individuals to gain access to valuable information in a company's database. Phishers send an e-mail, purportedly from a company, to the firm's customers promising new application features if they link to and log into the business's Web site and enter their account name and password. However, the site they link to is counterfeit.

Attackers use the account name and password to enter the company's real site and hack their way to network drives and other network resources, administrative logon information, additional online accounts, and sensitive data such as credit card and e-commerce logon data, according to iDefense's Dunham.

Dunham said phishers use similar schemes to target a company's employees and steal corporate intranet logon information. This would let attackers enter a company's internal network, steal confidential material, and cause other problems.
According to the TowerGroup's Tubin, antiphishing products, adopted primarily by larger nationwide or international banks, appear to be effective because phishers recently have started hitting smaller, regional financial institutions that are less likely to use the technology. Despite the technology's early success, MarkMonitor's Shull stated, potential victims must remain vigilant. "The Internet benefits everyone," he
explained. “It enables enterprises to reach new customers, and it enables criminals to do the same.” As for the future, the APWG’s Cassidy said phishing will be a war of escalation, attacks, and counterattacks. Concluded Bennett, “Phishing is evolving, so solutions need to evolve, too.” ■ David Geer is a freelance technology writer based in Ashtabula, Ohio. Contact him at
[email protected].
SCHOLARSHIP MONEY FOR STUDENT LEADERS

Student members active in IEEE Computer Society chapters are eligible for the Richard E. Merwin Student Scholarship. Up to ten $4,000 scholarships are available.

Application deadline: 31 May

Investing in Students
www.computer.org/students/
NEWS BRIEFS
Researchers Develop Memory-Improving Materials

Researchers are using alloys in a long-studied but not previously commercialized technique that promises to make computer memory even smaller and more energy efficient. This is part of the industry's ongoing attempt to avoid silicon's limitations and devise faster chips, including those used for memory, that use less space and power. Both Intel and Philips Research are working on phase-change memory, a form of nonvolatile memory—which keeps stored data even after power is
[Figure: cross section of a phase-change memory cell, showing the data storage region of chalcogenide phase-change material (polycrystalline or amorphous), a heater, and a resistive electrode. In phase-change memory, electrical current at different levels flowing through a resistive element heats part of a chalcogenide-alloy area, turning the material either crystalline or amorphous. The amorphous state offers more electrical resistance than the crystalline. This lets phase-change systems read binary data's ones and zeros by detecting the difference in resistance. Source: Intel.]
turned off—that researchers have studied since the 1970s. Phase-change memory, also called ovonics, records data by changing a medium between two physical states to represent binary data's ones and zeros. The systems typically record data by using rapid heating at different levels to leave a memory cell's chalcogenide-alloy molecules in either a disorganized, amorphous state or a crystalline state. Chalcogens are a set of elements that include oxygen, selenium, sulfur, and tellurium, the last of which is frequently used in CDs and other types of optical storage.

Chalcogenide's amorphous state offers considerably more electrical resistance than the crystalline state. Phase-change systems read data by detecting the difference in resistance.

A principal research challenge has been changing a medium's physical state in one memory cell very quickly without using large quantities of energy that could also improperly alter nearby cells. Researchers have thus focused on finding materials that avoid this problem. Philips has developed a material—an antimony-tellurium alloy doped with one or more of the following: germanium, indium, silver, and gallium—that avoids the problem because its physical state can change via short, low-voltage heat bursts, noted Karen Attenborough, leader of Philips Research's Scalable Unified Memory project.

Another advantage of the material is that manufacturers could layer it on silicon and build a memory device like a chip using existing semiconductor
fabrication processes. Moreover, Philips' phase-change systems could be much smaller than other nonvolatile memory technologies, such as flash and magnetoresistive RAM. On smaller flash or MRAM chips, the oxide layer on the gates is thinner, and the charges it must maintain to store data could eventually leak, Attenborough explained. This isn't a problem with phase-change memory, which doesn't store data as electric charge and doesn't use gates.

Because phase-change memory doesn't require electromagnetic technology, it would be particularly useful in high-radiation environments such as space and nuclear-power plants, said analyst Carl D. Howe with Blackfriars Communications, a market research firm. Radiation can wipe out the charges that other memory types store inside a silicon-dioxide shell, according to Howe. Much more radiation would be required to alter the state of the material used in phase-change memory, he explained.

It remains to be seen whether phase-change technology will be sufficiently inexpensive and widely available once commercialized for marketplace success, noted Intel Fellow Gregory Atwood. It also remains to be seen whether many users will buy a new technology they don't fully understand. "The primary challenge for the technology is moving it toward high-volume manufacturing," Atwood added. "While there are plenty of development challenges, we currently see no fundamental technical limitations." ■
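The read operation described above reduces to a threshold test on cell resistance: the amorphous state is far more resistive than the crystalline state, and that gap encodes one bit. A sketch of the idea — the resistance values, threshold, and bit assignment below are illustrative, not Intel's or Philips' figures:

```python
# Illustrative phase-change memory read: compare a cell's measured
# resistance against a threshold between the two material states.
# All numeric values are invented for illustration.
AMORPHOUS_OHMS = 1_000_000    # high resistance  -> read as logical 0
CRYSTALLINE_OHMS = 10_000     # low resistance   -> read as logical 1
THRESHOLD_OHMS = 100_000      # chosen between the two states


def read_bit(measured_resistance):
    return 1 if measured_resistance < THRESHOLD_OHMS else 0
```

The wide separation between the two resistance levels is what makes the read robust: the sensed value can drift considerably before a bit flips, which is also why radiation disturbs it far less than charge-based cells.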
Using Computers to Spot Art Fraud

A Dartmouth College team has developed a set of software tools that promise to help identify art forgeries, a high-stakes crime that can cost victims millions of dollars.

Computer science professor Hany Farid and mathematics and computer science professor Daniel Rockmore have designed tools that perform mathematical and statistical analyses that identify spatial relationships among an artist’s pen and brush strokes, as well as their intensity and density. These analyses yield mathematical patterns from a painting or drawing that can be compared to those derived from an artist’s other works. This can help identify whether, for example, a painting was actually done by its purported artist or whether several people worked on a single painting previously credited to just one person.

The process begins with a large photograph of a painting or drawing. The Dartmouth system then digitizes the photograph and divides it into many small sections for detailed, localized analysis, Rockmore explained. The system then runs the digitized sections through multiple filters, each of which removes part of the image, such as vertical lines. This lets the analysis focus on various facets of the image.

Dartmouth’s approach mathematically describes the artist’s style based on wavelet statistics. This technique works with wavelet mathematical functions—also used in image compression—to identify patterns within paintings or drawings. Rockmore said his system’s wavelet functions can recognize even the small differences that can occur when someone tries to imitate another artist.

As a test, the researchers examined a large oil painting attributed to Renaissance painter Pietro Perugino, well known for his frescoes on the walls of Vatican City’s Sistine Chapel. The system determined that four artists actually worked on the painting.
Rockmore noted that examining paintings can be a problem because materials, finishing, retouching, and conservation can distort the analysis. He said it makes more sense to apply the technique to drawings, which offer a stronger direct link from hand to paper than paintings. Art experts say the Dartmouth tools aren’t about to replace their own expertise, resulting from years of training and experience. Museum curators have also expressed skepticism and caution that the techniques are new and relatively untested. They say it is one thing for software to confirm fraud that experts have already identified and another to make an original finding.
Laurence Kanter, curator in charge of the Robert Lehman collection at New York City’s Metropolitan Museum of Art, said, “This is an experimental attempt that is very good and well intentioned.” However, he added, the system doesn’t have data from a large collection of paintings and drawings to use for making meaningful analyses. Nonetheless, Kanter said, it could provide experts with another tool for authentication.

The Dartmouth researchers say their current efforts are only in their initial stages and will require better mathematical tools. “It’s really too preliminary to say with any assurance that it’s going to be useful,” explained Rockmore. ■
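The wavelet-statistics idea behind the Dartmouth tools can be illustrated with a toy sketch. The code below uses a single-level Haar transform and simple subband statistics (mean and standard deviation) as a crude stand-in for the far richer statistical models the researchers actually use; every function and value here is illustrative, not drawn from their system:

```python
import statistics

def haar2d(img):
    """One-level 2-D Haar transform of a square grayscale image given
    as a list of lists with even side length. Returns four subbands:
    approximation (LL) and detail (LH, HL, HH)."""
    half = len(img) // 2
    ll, lh, hl, hh = ([[0.0] * half for _ in range(half)] for _ in range(4))
    for i in range(half):
        for j in range(half):
            a, b = img[2 * i][2 * j], img[2 * i][2 * j + 1]
            c, d = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            ll[i][j] = (a + b + c + d) / 4   # local average
            lh[i][j] = (a - b + c - d) / 4   # vertical-edge detail
            hl[i][j] = (a + b - c - d) / 4   # horizontal-edge detail
            hh[i][j] = (a - b - c + d) / 4   # diagonal detail
    return ll, lh, hl, hh

def signature(img):
    """Mean and standard deviation of each detail subband: a toy
    'style signature' for an image patch."""
    _, lh, hl, hh = haar2d(img)
    sig = []
    for band in (lh, hl, hh):
        flat = [x for row in band for x in row]
        sig.append(statistics.mean(flat))
        sig.append(statistics.pstdev(flat))
    return sig

def distance(sig_a, sig_b):
    """Euclidean distance between signatures; smaller means more alike."""
    return sum((x - y) ** 2 for x, y in zip(sig_a, sig_b)) ** 0.5
```

In this sketch, patches cut from one work would be reduced to signatures and compared against signatures from an artist’s authenticated works; consistently large distances would flag a patch as stylistically out of place.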
New Product Lets Users Paint on Wireless Security

A company has developed an interior paint designed to keep electronic eavesdroppers from intercepting wireless communications.

Wireless-security vendor Force Field Wireless developed DefendAir paint and paint additive, both laced with copper and aluminum fibers that form an electromagnetic shield and block most radio waves from leaving a room. This protects wireless networks from malicious war drivers. In war driving, eavesdroppers with specially equipped laptops can intercept data from a user’s wireless network or tap into Internet connections from up to a mile away. This technique exploits unprotected wireless networks whose transmission ranges extend beyond the buildings in which they are based.

One coat of the water-based DefendAir shields Wi-Fi (IEEE 802.11), WiMax (IEEE 802.16), and Bluetooth radio-based networks operating at frequencies ranging from 100 MHz to 2.4 GHz, according to Mike Taylor, Force Field Wireless’ senior product manager. Two or more additional coats would protect networks, including those based on ultrawideband technology, operating at up to 5 GHz, he noted.

DefendAir’s dense copper and aluminum fibers absorb, reflect, and refract radio waves, thereby keeping them from passing through, Taylor explained. However, DefendAir could also block normal radio, TV, and cellular-phone reception. Consumers could avoid problems by using cable TV or wireline telephones, Taylor noted. He said DefendAir adds a much-needed layer of wireless protection, particularly because the Wired Equivalent Privacy wireless-security protocol has flaws.

DefendAir costs $69.95 per gallon or $34.95 per quart. Traditional water-based house paint typically costs between $15 and $35 per gallon. DefendAir is available only in gray, which is suitable for a primer coat. Users could add pigments or mix the DefendAir additive with standard interior paint. ■
June 2005
News Briefs
Chilling Out PCs with Liquid-Filled Heat Sinks

Purdue University researchers are testing tiny, quiet, inexpensive heat sinks—consisting of channels filled with a circulating refrigerant—that could cool computer components. This technology could help solve the growing problem of economically removing heat from today’s computers.

As microprocessors have gotten more powerful, heat buildup within computers has increased. Also, manufacturers continue to pack PC cases with more components, leaving less room for heat removal equipment. And existing high-powered cooling systems, including those that use liquids, are large, noisy, and expensive.

Purdue’s microchannel refrigeration heat sink consists of channels 300
microns in diameter that circulate R134A, the same coolant found in modern household air-conditioning systems. The coolant absorbs and carries away heat, a process that turns the liquid into a gas. The gas then passes through a tiny compressor, which raises the vapor’s pressure. This forces the gas through a condenser, causing the vapor to release the heat it has collected and reliquefy so that it can recirculate through the heat sink.

Under optimal conditions, the current version of this device has achieved up to 27,000 watts of cooling per square centimeter of heat sink. This is far above the target level of 100 watts per square centimeter and more than enough to cool a PC, noted project leader Issam Mudawar, a Purdue professor of mechanical engineering.

The current prototype heat sink occupies a base area of about 1 square inch, which would let PC manufacturers place several in tight spaces throughout a computer. Manufacturers could modify the sink’s size to fit almost any host device, noted Mudawar. He estimated each sink would add $5 to a computer’s cost. The refrigerant would increase the price a bit more.

The new sink is an example of the growing field of microfluidics, in which liquid circulates through a chip or other device to enable cooling, genetic analysis, or many other applications across a range of disciplines. According to Mudawar, in addition to developing a useful cooling device, his research is also revealing new information about microfluidics, which so far has focused largely on devices considerably larger than his heat sink.

Kevin Krewell, editor of the Microprocessor Report newsletter, said the unique aspect of the Purdue project is the heat sink’s size. Small, quiet cooling is needed for applications such as game consoles, set-top boxes, and rack servers, as well as laptops, Krewell noted.

Mudawar said one challenge to commercializing his heat sink has been developing tiny compressors. Nonetheless, he noted, his team has already been contacted by many chip makers and is considering spinning off a Purdue-based company to commercialize the technology. ■

News Briefs written by Linda Dailey Paulson, a freelance technology writer based in Ventura, California. Contact her at
[email protected].
Purdue University scientists have developed a small, inexpensive computer heat sink that consists of tiny channels filled with a circulating refrigerant. The sink sits in a housing that helps route coolant into and out of the channels. Source: Professor Issam Mudawar, Purdue University.
Computer
Editor: Lee Garber, Computer;
[email protected]
PERSPECTIVES
Eric Brewer Michael Demmer Bowei Du Melissa Ho Matthew Kam Sergiu Nedevschi Joyojeet Pal Rabin Patra Sonesh Surana University of California at Berkeley
Kevin Fall Intel Research Berkeley
http://tier.cs.berkeley.edu
I hope the industry will broaden its horizon and bring more of its remarkable dynamism and innovation to the developing world. —Kofi Annan, United Nations, 2002
The Case for Technology in Developing Regions
Among the broad set of top-down Millennium Development Goals that the United Nations established in 2000 (http://www.un.org/millenniumgoals), one stands out: “Make available the benefits of new technologies—especially information and communications technologies.” Alongside good governance, technology is considered among the greatest enablers for improved quality of life. However, the majority of its benefits have been concentrated in industrialized nations and therefore limited to a fraction of the world’s population.

We believe that technology has a large role to play in developing regions, that “First World” technology to date has been a poor fit in these areas, and that there is thus a need for technology research for developing regions.

Despite the relative infancy of technology studies in developing regions, anecdotal evidence suggests that access to technology has a beneficial economic impact. Cellular telephony is probably the most visible application, but there are many others, some of which we cover in this article. The World Bank’s infoDev site catalogs hundreds of information and communications technologies (ICT) projects (http://www.infodev.org), albeit not all successful. Most of these projects use existing off-the-shelf technology designed for the industrialized world. Although it is clear that there are large differences in assumptions related to cost, power, and usage, there has been little work on how technology needs in developing regions differ from those of industrialized nations. We argue that Western market forces will continue to meet the needs of developing regions accidentally at best.
ICT RESEARCH FOR UNDERSERVED REGIONS

Evidence from the development of other technologies, such as water pumps and cooking stoves, demonstrates widespread impact from research.1,2 Novel ICT has the potential for great impact in a variety of fields ranging from healthcare to education to economic efficiency. However, we do not propose that ICT offers a panacea for the complex problems facing nations on the path to economic development. On the contrary, at best, ICT can enable new solutions only when applied with a broad understanding and a multidisciplinary approach.

0018-9162/05/$20.00 © 2005 IEEE
Published by the IEEE Computer Society
Figure 1. Schoolchildren in Uttar Pradesh, India, hooking up 220 V AC power by hand to power their PC. They connect and disconnect the power every day— “borrowing” the power from a neighbor.
Any deployment must deal with complex social issues such as underlying gender and ethnic inequalities, as well as existing players on which ICT might have a negative impact. Our strategy on this front is to work closely with social scientists and to partner with strong government or nongovernmental organizations (NGOs), which tend to understand local needs and dynamics in a way that is not possible from afar.

Our research focuses mainly on regions where most of the population makes less than US$2,000 per year. (Income numbers are normalized for equivalent purchasing power.) The US average income per year is US$37,800;3 about 65 percent of the world population is below this line, and 80 percent of that number make less than US$4,500. Progress in India and China in the past few decades has actually improved these numbers overall, but Africa has fallen further behind in per-capita income and thus remains the region that poses the greatest developmental challenges.

Finally, although we have studied ICT in many regions, our direct experience has focused on India, Brazil, Bangladesh, and Uganda, which leads to an inherent bias toward the issues of these countries.
Why now?

Three things make this the right time for deploying ICT in developing regions: the impact of Moore’s law, the increased availability of wireless communications, and the emergence of a more supportive business environment.

The impact of Moore’s law has decreased the cost of computing to fractions of a cent per user. Unfortunately, this cost applies only to the shared infrastructure, not to personal devices. Nonetheless, with a focus on sharing, the cost of computing and storage becomes realistic even for the poorest users.

Second, the high-volume production of wireless communications, particularly cellular and Wi-Fi, has decreased their costs as well. For example, the majority of villages in Bangladesh, almost entirely without telephony a decade ago, now have shared cell phones. For rural areas, a wireless infrastructure appears to be the first kind of infrastructure that is affordable. We hypothesize that a successful wireless infrastructure could lead to sufficient increases in rural incomes to make other infrastructure investments viable, such as water and power distribution.

Third, the diffusion of technology worldwide and the growing access to capital have created a favorable environment for entrepreneurship and experimentation, as discussed below. This supportive business environment, combined with the success of franchising as a way to deploy large-scale ICT projects, means that there is a viable path from research to a large-scale impact in the real world.
What should we do?

ICT projects have four main technology needs: connectivity, low-cost devices, appropriate user interfaces (UI), and power.

Rural connectivity is a challenging prerequisite in most ICT projects. Although wireless has broad use in urban areas, most rural areas are without coverage. The low population density in rural areas (even in the US) limits the ability to deploy base stations profitably. Intermittent—or delay-tolerant—networking requires developing and supporting applications that function without a connected end-to-end networking path. By moving away from the always-on model of traditional voice and IP networks, we can address fundamental disruptions such as intermittent power, extend connectivity via “sneaker” nets, and offer a generally lower-cost infrastructure.

Low-cost devices would enable many applications. The best avenues seem to be enhancing cell phones, enabling sharing, and reducing the basic cost of screens, batteries, and boards.

The generally complex and task-specific nature of modern PC technology creates many UI issues—from basics such as keyboard, font, and multilanguage problems to deeper conceptual issues such as the need for speech interfaces for users with limited literacy.

Finally, the availability and quality of electric power form basic hurdles for ICT deployments.
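The delay-tolerant networking idea above can be sketched as a toy store-and-forward node: messages queue locally while no link exists and move on whenever a contact appears, whether that contact is a radio link or a courier carrying storage media in a “sneaker net.” This is an illustrative sketch under those assumptions, not the interface of any real DTN implementation:

```python
from collections import deque

class DtnNode:
    """Toy store-and-forward node for delay-tolerant networking:
    bundles queue while no link exists and flush at each contact."""

    def __init__(self):
        self.queue = deque()
        self.link_up = False

    def send(self, bundle, deliver):
        """Queue a bundle; it moves immediately only if a link is up.
        `deliver` is whatever callable carries data on the current hop."""
        self.queue.append(bundle)
        if self.link_up:
            self.flush(deliver)

    def contact(self, deliver):
        """A contact opportunity appeared (a Wi-Fi link, a passing
        vehicle with storage, a restored power-and-network window)."""
        self.link_up = True
        self.flush(deliver)

    def flush(self, deliver):
        # Forward everything queued, oldest first.
        while self.queue:
            deliver(self.queue.popleft())
```

The point of the sketch is that the application keeps working while disconnected; delivery latency, not end-to-end availability, is what varies.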
ECONOMIC ISSUES

Economic issues have a significant impact on the opportunities for ICT research in developing regions. Three factors have an impact on providing access to ICT in these areas: investment capital, franchising, and shared technology.
Investment capital

Because it is fundamental to applying innovation, the absence of investment capital hinders advancements in developing regions. In capital-deprived areas, the sole asset for leverage is often family land, which is unfortunately locked up as “dead capital” because there is no way to prove ownership. Inspired by the work of researchers like Hernando De Soto,4 many governments are deploying ICT-based land record systems to better utilize these assets.

A revolution in this space has been uncollateralized loans, known as microcredit, advanced by Grameen Bank in Bangladesh and FINCA International in Bolivia (http://www.villagebanking.org). Grameen Bank, founded in 1975 by Bangladeshi economist Muhammad Yunus,5 loaned out an astonishing US$425 million last year in tiny amounts (US$50-$500), generally without collateral, and has a recovery rate approaching 99 percent (http://www.grameen-info.org/bank/GBGlance.htm). In a typical scenario, a Bangladeshi villager uses a microcredit loan for assets such as livestock and then repays the loan with profits from dairy products. These entrepreneurs often expand their business from a single goat to small herds and even become employers—in fact, most loans go to existing businesses. A remarkable 47 percent of loan recipients eventually cross the poverty line, making microcredit one of the most effective tools against poverty.
Franchising

An important outcome of microcredit is that it enables franchising. Traditional franchising—for example, a fast-food business—is a way to scale quickly using the franchisees’ capital. In developing regions, microcredit becomes the source of capital for franchisees. This is uniquely beneficial in areas where the demand for certain services exists, but an effective capital flow does not. Microcredit creates a two-way street: Motivated entrepreneurs with local knowledge have more opportunities, and microcredit lenders can leverage the franchise brand and training for better returns.

The flagship of this model is Grameen Telecom (http://www.grameen-info.org/grameen/gtelecom), which grew out of Grameen Bank. Franchisees use
microcredit to buy a cell phone and then operate it as a pay phone for their neighbors. This model has two big benefits: scalability and sustainability. Using this model, GT now has 95,000 franchisees covering more than 50,000 of the 68,000 villages in Bangladesh, and serving some 60 million people. On average, phone operators earn more than twice the average per capita rural income. GT revenues per rural phone are double that of urban phones.6

Since scalability is a fundamental requirement of many projects, we believe the franchising model is a crucial part of many solutions. Second, franchisees are uniquely good at keeping the system operational. Any ICT project will have unforeseen glitches; the difference here is that someone’s livelihood depends on resolving these glitches. Franchisees form the “human in the loop” that can add a significant level of robustness to any system. This is in stark contrast to most (but not all) donated-computer projects, which have no viable plan for ongoing maintenance and support.
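A quick back-of-the-envelope check of the scale these figures imply (all inputs are the numbers quoted above; the derived ratios are only rough illustrations):

```python
# Figures quoted for Grameen Telecom's shared-phone franchise model.
franchisees = 95_000
villages_covered, villages_total = 50_000, 68_000
people_served = 60_000_000

# Fraction of Bangladeshi villages with a shared phone (~0.74).
village_coverage = villages_covered / villages_total

# Average number of people served per phone operator (~630),
# illustrating why sharing makes the economics work.
people_per_operator = people_served / franchisees
```

Roughly three-quarters village coverage, with each operator serving on the order of several hundred neighbors, is what makes a single shared handset a viable small business.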
Shared technology

Although personal devices are the norm for developed nations, they are unlikely to succeed in areas where most of the population is below the poverty line. Indeed, most successful projects to date, including Grameen Telecom, depend on shared devices. Given that even cell phones are shared, with a cost approaching US$40, it is unlikely that personal computers make sense—even a US$100 laptop would cost too much and use too much power. For example, neither the Indian Simputer nor the Brazilian Computador Popular succeeded as a personal device for the poor.7 Moreover, Moore’s law will not solve this problem because increasing the number of transistors per chip helps functionality but not cost. CPUs are already a tiny fraction of overall system cost and are dwarfed by the costs of packaging, discrete components, batteries, and screens.

Fortunately, most rural areas have a history of shared technology, including tractors and pay phones. We thus believe that the focus must remain on shared technology, particularly community kiosks, schools, and shared cell phones. There is some room for personal devices, but primarily for specialists such as health workers.
IMPACT AND OPPORTUNITIES

Although environmental and cultural concerns require consideration, the long-term impact of technology depends on its economic sustainability. The development landscape is littered with failed ICT pilots based purely on short-term charity. Projects can be sustainable either because they serve a public good and have ongoing support—for example, through taxation—or because they are at least self-sustaining financially.

Our research covers both kinds of projects in five areas: health, education, disaster management, e-government, and economic efficiency.
Healthcare

Adequate healthcare is one of the greatest needs in developing regions, which remain home to the vast majority of preventable diseases, including 96 percent of malaria, 95 percent of HIV/AIDS, and 90 percent of tuberculosis (http://www.theglobalfund.org/en). Child mortality rates are also high: 10 percent of children under age five die compared with one in 143 in high-income nations (http://www.developmentgoals.org/Child_Mortality.htm). Although it may be difficult for ICT to have an impact on certain health issues like malnutrition, it can directly impact some areas including disease control, telemedicine, improving doctors’ efficiency, offering low-cost diagnostics, improving data collection, and providing patient management tools. ICT has played a role in some healthcare success stories and other research opportunities are available.

River blindness. ICT has been pivotal in controlling river blindness in West Africa. Hydrological sensors located along 50,000 kilometers of rivers in 11 countries collect data and transmit it via satellite to entomologists, who use forecasting software to compute the optimal time, dosage, and area over which to spray larvicide aerially to destroy the disease-spreading blackfly.8 This endeavor protects 30 million people and has also reclaimed 100,000 square miles of farmland.9

Child mortality. At a cost of less than US$2 per capita, the Tanzania Essential Health Initiatives Program conducted periodic computer analyses of prevailing cause-of-death data collected from rural households by health workers in two Tanzanian districts.10 Results indicated that only a fraction of the budget was being spent on the actual killer diseases, which led to acute shortages of key medicines. With 80 percent of the people dying at home and leaving no records, this insight was previously hidden. Strategically retooling the budget in response to
timely reports on prevailing disease characteristics led to more effective care and dramatically reduced child mortality by an average of 40 percent in five years—from 16 percent to less than 9 percent for children under age five.

Aravind Eye Hospitals. The mission of this network of five self-sustaining hospitals in Tamil Nadu, India, is to sustainably eradicate blindness. Aravind, which treated nearly two million patients and performed more than 200,000 eye surgeries in 2004,11 emphasizes using existing technology, even locally manufacturing its own intraocular lenses. A cataract surgery costs only about US$10; although about half the patients are treated for free, the practice is still profitable. The network uses a VSAT satellite connection at 384 Kbps to videoconference among hospitals and also to communicate with a mobile van conducting “eye camps” in rural areas. Our experience working with Aravind has revealed several research opportunities.

Telemedicine. Existing telephony networks do not provide adequate support for videoconferencing with centrally located doctors to enable remote consultation and health worker training. Telemedicine—using telecommunications for the remote diagnosis and treatment of patients—requires designing low-cost, low-power, long-range, and high-bandwidth wireless technology.

Computer-assisted diagnosis. There is a growing shortage of health workers, with the poorest regions hurt most (http://www.globalhealthtrust.org). To address this problem, in addition to other recommendations, there is an urgent need to automate simple diagnostic tasks. Software such as image-recognition tools or decision-support systems and hardware such as the ImmunoSensor chip (http://www.coe.berkeley.edu/labnotes/1003/boser.html) can reduce the strain on overburdened health staff, improve efficiency, and mitigate the lack of infrastructure such as labs. We are currently testing a prototype that diagnoses diabetic retinopathy from digital retina images.
Without automation, doctors spend about five minutes per diagnosis, even though the surgery itself only takes 15 minutes. Assuming 90 percent accuracy, which we are approaching, partially automated diagnosis would result in an annual savings of thousands of hours per hospital, which could dramatically increase the number of patients receiving surgery.

Field-worker support. With low utilization of healthcare services and poor quality of health data, many organizations are sending semitrained health workers into rural areas. These workers typically are equipped with handheld devices to reduce errors and time for data transcription. Although pilot projects show promising results (http://www.healthnet.org/usersurvey.php; http://www.healthnet.org/coststudy.php), there are problems due to frequent power outages (http://www.opt-init.org/framework.html) and poor telecommunications. This is a strong motivator for work in developing low-cost intermittent networking devices.
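The claimed annual savings of thousands of doctor-hours from partially automated retinopathy screening can be checked with rough arithmetic. The five minutes per manual reading comes from the text; the annual screening volume and the 10 percent manual re-read fraction are assumptions for illustration:

```python
# Rough arithmetic behind the screening-time claim.
minutes_per_manual_reading = 5          # from the article
patients_screened_per_year = 100_000    # assumed hospital volume

doctor_hours_no_automation = (
    patients_screened_per_year * minutes_per_manual_reading / 60
)

# With ~90%-accurate automated triage, assume doctors manually
# re-read only the ~10% of cases the software flags.
doctor_hours_with_automation = 0.10 * doctor_hours_no_automation
hours_saved = doctor_hours_no_automation - doctor_hours_with_automation
```

Under these assumptions the saving is on the order of 7,500 doctor-hours per year, consistent with the “thousands of hours per hospital” claim.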
Education

According to a UNESCO study of global education, only 37 of 155 countries have achieved a universal primary education.12 In most high-income countries, students are expected to attend school for at least 11 years, but only two out of the 37 low-income countries offer the same level of education. In recent years, several attempts have been made to integrate ICT into rural and low-income urban schooling. By combining technology with sound educational principles and teaching practices, many of these initiatives have demonstrated increased learning.

Azim Premji Foundation. Since 2001, electronic courseware has been developed in local languages to aid classroom learning in rural schools in the states of Karnataka and Andhra Pradesh, India. Students use shared computers and work in small teams to run the courseware, and the content is designed so that students work through the material in a self-paced manner and via discussions with one another. A randomized experiment involving 2,933 students from 32 schools in both states showed statistically significant improvements on learning achievement tests for students in Andhra Pradesh but not in Karnataka (http://www.azimpremjifoundation.org/downloads/ImpactofCALonLearningAchievements.pdf). One explanation offered by the investigators is that teachers were not part of the computer-aided learning process in the Karnataka pilot. In Andhra Pradesh, teachers accompanied the students to the telecenters, but students in Karnataka were presumably responsible for using the courseware by themselves. As of March 2005, this initiative covered more than 10,200 schools.

Digdarshan. In 2000, researchers placed a solar-powered computer in a rural school in Uttar Pradesh, India, and developed Hindi CD-ROM courseware for it. Students used the computer in small groups to help one another understand the material, and enjoyed the interactive quizzes to the
extent that they persisted in retaking the exams until they attained full scores; in the process, they helped one another to understand why their initial answers were incorrect. More importantly, given the shortage of qualified teachers, the courseware fills in gaps in the teachers’ knowledge and aids senior students in coaching their juniors. Finally, parents became more willing to send their children to school because they were aware of the importance of IT literacy. As of April 2005, Digdarshan has reached 700 schools with 700 more in progress.

Based on these examples, we have identified several research opportunities in this area.

Digital story authoring tools. We found that children are highly motivated to create multimedia digital stories that impress fellow students. Creating these stories fosters active learning because the authors must explain academic concepts while they are developing their writing and communication skills. However, current authoring tools such as Microsoft PowerPoint are too complicated, and simpler tools like KidPix are not necessarily culturally appropriate. Tools can be localized by evoking cultural icons and mythology such as the Amar Chitra Katha comics in India. For example, with the Suchik project research prototype (http://www.iitk.ac.in/MLAsia/suchik.htm), students create comic stories with user-defined characters. We envision authoring tools that children can use to create interactive stories and games.

Local content repository. To the extent that local content exists, its limited availability often restricts its impact. We envision a Web-accessible repository where students and teachers can store and retrieve the digital stories, games, and other electronic content that they author. Access to an audience via such a repository is expected to motivate students to create high-quality work.
This repository can also act as a digital library of resources, such as background articles, raw data, and clip art that students can use when researching and authoring content. Finally, this repository can manage content created by experienced teachers or by the well-educated diaspora living abroad, which other teachers can use and even customize. The project requires only intermittent networking for each school.
Disaster management

The Indian Ocean tsunami disaster in December 2004 was a tragic reminder of the urgent need for better disaster warning and relief systems. ICT has
a major role to play in both areas, but had little impact when the tsunami occurred. The focus in the media has been on tsunami detection, but an effective disaster notification mechanism, such as the early warning system for cyclones in Bangladesh,13 is arguably more important and is an application in which ICT can directly save lives.

Small victories. The MS Swaminathan Research Foundation runs a communications network in rural areas of Pondicherry in southern India through a web of “information villages” networked through wireless connections. The MSSRF network normally is used to provide communications, weather forecasts, wave and fish location patterns, and other similar services to coastal and inland villages. On the day of the tsunami, Nallavadu, a village of 3,600 residents, learned of the tsunami via phone and broadcast a warning over a public address system, and no lives were lost. A second village, Veerampattinam, used a similar broadcast after the first wave hit, and lost only one life out of 6,300. Later, these two villages used the databases in their knowledge centers to organize relief measures and distribute aid. MSSRF is now endeavoring to install similar systems and knowledge centers in neighboring coastal villages (http://www.mssrf.org/notice_board/announcements/tsunami/tidal_tragedy.htm).

Fast-deployment networks. Rural wireless coverage becomes particularly important after a disaster. Although cellular coverage would be ideal, even intermittent networking using a range of technologies, including “sneaker net” and satellite communications, would be helpful. For example, the Grameen Telecom village cell phones were extremely helpful in 2004 when the worst flood in modern history covered 60 percent of Bangladesh and left 30 million homeless.14 However, many disaster-prone areas are too rural to have existing cellular coverage.

Relief logistics.
Matching donations to local needs is consistently a problem. Following the recent tsunami, there were several instances of wasted donations. For example, donors kept sending clothes, while disaster areas did not receive much-needed rice and shoes. Software is already a key component for addressing these problems. One example is the Fritz Institute’s Humanitarian Logistics Software, a Web-based application for automating the logistics behind mobilization, procurement, distribution, and finance processes (http://www.fritzinstitute.org).
Another organization, Auroville, is addressing these issues independently by establishing Tsunami Rehabilitation Knowledge Centers with the express purpose of facilitating the flow of information between organizations and villages and providing better coordination among the NGOs (http://www.auroville.org/tsunami/projects.htm). Relatively simple shared Web sites, even with intermittent connectivity, are central to the strategy for this collaborative effort.
E-government
The case for technology in e-governance encompasses three broad application areas: public information, digitization of records, and transactions involving the state. There is evidence that e-government services offer significant benefits to users15 and that developing countries are embracing the Internet as a citizen-government interface.16 However, many areas—including sub-Saharan nations and parts of Asia—lack both online services and basic computerization of processes within the government.
Friends network and Akshaya. A study of the Fast Reliable Instant Efficient Network for Disbursement of Services (Friends) bill-payment network (http://www.keralaitmission.org/content/egovernance), which offers one window that replaces more than 1,000 forms and service requests at district headquarters, shows that even in the most rural areas more than 95 percent of people prefer online transactions. The Friends study indicated that single-counter e-government kiosks could handle an average of 400 transactions daily, whereas a human operator could perform only a fraction of that service. The Friends network represents savings not only to the government agency deploying the service but also directly to users, both in terms of money and time saved per month. There is also evidence that e-government services, even when run by private entrepreneurs on behalf of the government, can still be sustainable (http://www.drishtee.com).
The Akshaya project (http://www.akshaya.net) in northern Kerala extended the Friends program by creating a Web-based interface. Currently, more than 100 e-governance kiosks are providing e-payment services. These online services are established with the assistance of local government bodies and are run by entrepreneurs who charge small fees for paying bills. The entrepreneurs in turn maintain accounts with the state and pay online after accepting cash payments.
Concurrent with this program, the state trained at least one person in each household in basic computer usage with the aim of eventually enabling people to use e-government services on their own. The e-pay services are among the most successful in the state, and several entrepreneurs have raised their credit limits to handle the high demand.17
Bhoomi. The Bhoomi land records project (http://www.bhoomi.kar.nic.in) in Karnataka, India, deals with the problem of consolidating multiple forms of land records, accumulated over decades or even centuries and representing different systems of land records under various regimes throughout history. Bhoomi seeks to digitize these records into one consistent format. The state government digitized more than 20 million land records held by 6.7 million citizens in more than 2,700 villages in the state. The digitization included visits by land surveyors, several levels of checks and balances to ensure the veracity of land claims, and the creation of a fingerprint authentication system that integrates all current transactions by village accountants. A secure database at the back end is meant to ensure that the land data thus gathered remains reasonably protected from the machinations of local officials, who often are accused of harassment and extortion of marginal villagers. A touch screen for land record verification is kept in a public place to ensure that villagers can check on the status of their land records. Printing records takes about two minutes and avoids a complex application system that previously took up to 30 days.
Bhoomi has some important long-term prospects. The state government currently is converting the data into a geographic information system format. Banks already use Bhoomi data for farm credit, a process that now takes fewer than five days, compared with one month in the past. Banks also use the records to estimate farm credit requirements and asset leverage. The previous shortage of such information was an important factor that limited access to capital in rural areas.
Finally, land-use planners can use the system to get a better idea of crop patterns.
Economic efficiency
Economic efficiency depends on information and communication. Without an ICT infrastructure, communication relies on physical contacts that are expensive in terms of both time and money. As in industrialized nations, communications networks can lead to more efficient and transparent markets. ICT can facilitate improvements in communications, fair-market price discovery, and supply-chain management.
Grameen Telecom. Spawned by Grameen Bank, GT’s goal is to provide affordable telecommunications access to rural villages in Bangladesh. The basic GT village phone model consists of franchisees who offer telephony using a cell phone and charger leased from the company. Each franchisee earns revenue by reselling phone minutes, which they buy from GT at a flat rate. Wealthier urban customers subsidize the base stations.
Surveys show that 50 percent of GT phone use is economic in nature.14 Survey responses also indicated that in half the cases, the only alternative form of communication was via travel or courier, which was two to eight times more expensive and required 2.7 hours on average.18
Enhanced communications access has improved economic transparency in the GT phone villages. For example, a chicken egg farmer found that by calling the market for the fair price of eggs, she can negotiate with middlemen for a better offer. Before she had access to the phone, the farmer had little negotiating power.19
In addition to market pricing, villages also use the phone to guarantee full delivery of remittances from relatives working abroad, which accounted for 4.5 percent of Bangladesh’s GDP in 2001.20 The villagers use the phone in two ways. First, relatives abroad can call and verify that the correct amount of money was received. Second, by using the phone to obtain foreign exchange rates, the villagers can negotiate fair rates.
GT benefited to some extent from the high population density of Bangladesh, which enables profitable base stations. Lower-density areas, which include most rural regions, require new wireless technology. In addition, many of these economic activities require only intermittent networking.
e-Choupal. ITC’s e-Choupal initiative (http://www.echoupal.com) is a computer kiosk deployment intended to create an efficient agricultural supply chain. Traditionally, farmers sold their crops to ITC via bidders at mandis—aggregated market yards.
This scheme had several inherent inefficiencies: insufficient knowledge of current market prices, no differentiation of price with respect to crop quality, and high transportation and handling costs. In 2000, ITC began building information kiosks in the Indian state of Madhya Pradesh to provide farmers a direct market for their crops. Each kiosk is equipped with a PC, a printer, and power backup, and the network connection is provided via phone or VSAT.21 The kiosks also have automatic moisture analyzers that measure the quality of crop samples. ITC subsidizes the equipment, and a local farmer houses the kiosk.
The farmers use the kiosks for informational and commercial tasks. The e-Choupal Web portal provides farmers with market prices, weather predictions, and educational material on agricultural practices. Because the quality of the crop can be measured locally, the farmers immediately receive payment for their crops from the kiosk. Retail companies also can use the kiosks to sell their agricultural products directly to the farmers. As a side effect, NGOs have used the networking infrastructure to pursue development work.21
The e-Choupal project has expanded from Madhya Pradesh to provide a total of 4,200 kiosks covering 29,500 villages in six Indian states. An open question is how to leverage kiosks driven by supply chains to provide a broader array of services.
EARLY RESEARCH AGENDA
Communication and power infrastructure, appropriate user interfaces, and access to inexpensive computing devices all appear to be areas ripe for innovation.
Networking infrastructure
A consistent theme in ICT research is the need for network infrastructure. Unfortunately, most current networking technologies remain unaffordable in developing countries. For example, the typical cost for a telephone landline is US$400, which is acceptable in the US, where 90 percent of households can afford to pay $30 a month for telephone service. In contrast, in India, more than 60 percent of the population can afford at most US$5 a month for communications.
Moreover, communication technologies are even less practical for rural areas. In a typical network, more than 70 percent of the cost is in the access network, not in the backbone. This constitutes an important limitation for rural deployment, since user density and consumer paying capacity are low. Thus, the recent growth of cellular operators has been an urban phenomenon. Even though the majority of the developing world population lives in rural areas—for example, 74 percent in India (http://www.censusindia.net)—most rural areas remain uncovered (http://www.auroville.org).
Rural wireless. To help alleviate this situation, we have begun to research cost-effective technologies for access networks in sparsely populated rural areas. Among the available technologies, wireline networks are unaffordable. Although satellite-based systems such as VSAT can cover remote areas, they are prohibitive in terms of initial and recurring costs;
cellular technology is designed for high population densities and typically does not have long range. Three technologies look promising: modified Wi-Fi, CDMA450, and 802.16.
We believe that the most promising solution to extend coverage to rural areas is a mixture of point-to-point and point-to-multipoint wireless technologies. These technologies could form a backbone that provides connectivity down to the village level, combined with a Wi-Fi hot spot or a cellular base station for access to individual households.
Our technological focus has primarily been on adapting the IEEE 802.11 (Wi-Fi) family of technologies (http://grouper.ieee.org/groups/802/11) to provide the backbone network. The 802.11 standard enjoys widespread acceptance, has huge production volumes, with chipsets costing as little as US$5, and can deliver bandwidth of up to 54 Mbps. Furthermore, point-to-point links using 802.11 high-gain directional antennas can span impressive distances of up to 88.7 km (http://www.wired.com/news/culture/0,1284,64440,00.html).
Despite its attractive features, the 802.11 Media Access Control protocol was designed primarily for short-distance, broadcast media with tens of mobile hosts contending for access. Consequently, it is inefficient for long-distance connections among a small set of known hosts. However, small modifications at the MAC layer should overcome these problems, and this is an area of active research.22,23 With these modifications, we believe that Wi-Fi is the most promising short-term alternative.
In contrast, a number of existing deployments have used proprietary technologies to provide village-level connectivity. The Akshaya network uses a combination of Wi-LAN (http://www.wi-lan.com) and wireless in local loop (WipLL, http://www.airspan.com) links. Unfortunately, the production volumes for these technologies are too low to enable costs comparable to mass-produced and standardized Wi-Fi. Tenet’s CorDECT solution (http://www.midascomm.com) has a low cost due to mass production, but it only delivers a 70-Kbps peak data rate.
The CDMA450 (http://www.cdg.org/technology/3g/cdma450.asp) cellular technology operates in a lower frequency band (450 MHz) than traditional Global System for Mobile (GSM) communication and code division multiple access (CDMA) solutions, achieving larger coverage per base station and thus lower cost (a radius of 49 km versus 35 km using standard GSM). Early successes include China and Romania.
Another promising alternative is WiMax, the IEEE 802.16 standard for wireless metropolitan-area networks (http://grouper.ieee.org/groups/802/16). From a technical point of view, the standard is well suited to the problem at hand: It accommodates thousands of users, uses the available spectrum efficiently, covers ranges of more than 50 km, and has quality-of-service features. Unfortunately, at this time the standard is not finalized, and prestandard equipment sells at prohibitive prices. However, if WiMax shares the market success of Wi-Fi, it will likely evolve into a useful tool for rural networking infrastructure. One advantage of CDMA450 and 802.16 is that they require fewer large towers than a Wi-Fi mesh, thus reducing the overall cost for flat terrain.
Early results. To demonstrate the efficacy of Wi-Fi for rural connectivity, we have undertaken several deployments in India in addition to our wireless testbed in Berkeley. In one such link, deployed for the Aravind Eye Hospital, we have demonstrated a bandwidth of 4 Mbps over a 10.5-km distance. This is a vast improvement over the 33.3 Kbps achieved by existing WiLL links.
To address the inefficiency of Wi-Fi in a multihop network with long-range, point-to-point links, our group has identified MAC layer modifications that significantly increase bandwidth efficiency and minimize interference when multiple radios are installed on the same tower. Our approach changes the contention-based CSMA scheme to a time division multiple access scheme, yet still operates with standard, inexpensive 802.11 hardware. We are currently testing this approach on our wireless testbed in Berkeley and are planning to use it for future deployments of larger multihop networks in India.
Intermittent networking. The focus of nearly all networking technology—including telephony, the Internet, and most radios—is on providing continuous, real-time, synchronous communication.
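The slot-based access just described can be sketched in a few lines. This is a toy model with invented slot parameters; the actual modification operates inside the 802.11 driver and hardware timing, not in application code:

```python
# Sketch: assign synchronized TDMA slots to point-to-point links that
# share a tower, so co-located radios never transmit simultaneously.
# The 20-ms slot length is illustrative, not a deployed value.

def build_tdma_schedule(links, slot_ms=20):
    """Round-robin: link i may transmit only during slot i of each epoch."""
    epoch_ms = slot_ms * len(links)
    return {link: (i * slot_ms, (i + 1) * slot_ms, epoch_ms)
            for i, link in enumerate(links)}

def may_transmit(schedule, link, t_ms):
    """True if `link` owns the slot containing time t_ms."""
    start, end, epoch = schedule[link]
    phase = t_ms % epoch
    return start <= phase < end

schedule = build_tdma_schedule(["link-A", "link-B", "link-C"])
```

Because at most one link on the tower holds the slot at any instant, contention (and hence inter-radio interference) is designed out rather than arbitrated, which is why this scheme scales to long links where carrier sensing fails.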
However, the connectivity and power infrastructure required to support such continuous access may be too costly to justify deployment in many developing regions. We believe that many useful applications could instead be designed around asynchronous communication and only intermittent connectivity and that this form of infrastructure can be created at a significantly reduced cost. Some concrete examples of such applications include electronic form filling (much like postal forms), crop or commodity price updates, weather forecasts, offline search engine queries, and more traditional applications such as e-mail and voicemail.
To this end, we have been continuing the development of a delay-tolerant networking architecture.24 DTN provides reliable delivery of asynchronous messages (like e-mail) across highly heterogeneous networks that can suffer from frequent disruption. Applications that are developed for use with DTN do not make assumptions about the timeliness of network transactions, so DTN messages can be delayed or batched to reduce cost or power.
DTN can leverage several well-known networking technologies, but it also can take advantage of some nontraditional—and perhaps nonobvious—means of communication. One example is data mules, which are devices used to ferry messages from one location to another in a store-and-forward manner. The DakNet project deploys data mules in the form of bicycles or motorcycles that make regular trips to remote villages.25 In many developing regions, rural bus routes or mail carriers regularly visit villages and towns that have no existing network connectivity. By applying DTN technologies, we can leverage the local transportation infrastructure as a novel networking infrastructure, providing connectivity (albeit intermittent) at a significantly reduced cost.
Accommodating networks such as these, which can be partitioned regularly and can utilize highly varying underlying technologies (such as data mules), requires new approaches to network routing, naming, and protocol design. Many routing
Figure 2. Students and village residents putting up a long-distance Wi-Fi link at sunset in southern India. Wireless technologies have the potential to provide a backbone that offers connectivity down to the village level.
algorithms typically operate on a routing graph with one large, connected component and fail to function well when the graph is frequently partitioned, even when the outages are completely predictable. However, by taking advantage of in-network storage and mobile routers and using novel approaches to message routing, a DTN system can deliver data reliably even in the face of a wide range of disruptions.
Early results. In collaboration with others in the Delay Tolerant Networking Research Group (http://www.dtnrg.org), we have extended the DTN architecture24 and have proposed novel protocols for message transmission and forwarding. We have built a functional reference implementation and demonstrated its operation on a range of platforms from PCs to PDAs.26 We have also taken some theoretical steps toward developing routing algorithms that take into account both predicted and unexpected outages.27,28 Finally, we are looking at storage systems that provide availability and best-effort consistency on top of intermittent networks.
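One way to route over predictable outages is to search a contact schedule rather than a static graph. The sketch below computes the earliest delivery time over scheduled contacts, modeling a data mule (here, a bus) as a pair of timed link appearances; the schedule and node names are invented for illustration, and real DTN routing must also weigh storage limits and unpredicted failures:

```python
# Earliest-arrival routing over a known contact schedule.
# Each contact is (time, from_node, to_node): the link exists only at
# that instant, so a bundle must already be stored at `from_node`.

def earliest_arrival(contacts, src, dst, t0=0):
    """Earliest time a bundle leaving src at t0 can reach dst, or None."""
    best = {src: t0}                      # earliest known arrival per node
    for t, a, b in sorted(contacts):      # process contacts in time order
        if a in best and best[a] <= t:    # bundle is waiting when link appears
            best[b] = min(best.get(b, float("inf")), t)
    return best.get(dst)

contacts = [
    (10, "village", "bus"),   # bus stops at the village at t=10
    (5,  "village", "bus"),   # an earlier stop on the same route
    (20, "bus", "town"),      # bus reaches the wired town at t=20
]
```

Because contacts are processed in time order, a single pass suffices: a bundle can only ride a contact that appears after it has already been stored at the contact's source, which is exactly the in-network-storage behavior described above.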
Inexpensive computing devices
Most previous attempts to develop inexpensive computing devices were commercial failures.7 Looking at these projects, we can identify a number of factors that need to be understood when designing low-cost devices. In addition to determining whether to follow a model of shared or personal device ownership, researchers must decide whether to design general-purpose devices (like PCs) or devices tailored for specific tasks such as voice mail, form filling, or data collection. Task-specific devices, such as cell phones, require less training and can be less expensive (via omission) than more general-purpose devices, yet they still can have sufficient volume for economies of scale.
A third question concerns the correct form factor for the device: At one end of the spectrum are PCs and smaller variants like AMD’s 50x15 personal Internet communicator device (http://50x15.amd.com/home/default.aspx) that consume more power, while at the other end are handheld devices like cell phones. We are pursuing the design of smart phones as a way to increase the impact of cell phones.
Other important research topics that are relevant to reducing the cost of computing devices are low-cost displays and batteries and specialized sensors for testing water, air, or soil quality and for disease detection.
User interfaces
Although UI design and the field of human-computer interaction have made much progress, even the basic components of computing interfaces encounter problems in developing regions. Mouse motions and clicks are not intuitive to the inexperienced user, and differences in language and alphabet render a single keyboard much less useful. Despite some success with Unicode, the representation of most languages remains in progress, and there are still limited resources for localization of existing content and software. Even given representation and localization, text-based interactions with computers render these devices useless for illiterate or semiliterate users. Finally, the cost of the display is a significant component of the total cost of most computing devices. All of these issues imply significant ICT research challenges.
We also expect DTN-based systems to present usability challenges due to their intermittent online status. This hypothesis is based on fieldwork in Uganda,29 in which the microfinance technology that we evaluated had support for both online and offline transactions. Several users who were newly introduced to this system seemed to encounter difficulty understanding disconnected operations and how they differed from real-time transactions, which appeared to be more intuitive.
Speech. One avenue is finding mechanisms for real-time speech recognition by low-cost, power-constrained devices. We believe this problem can be made tractable by dynamically switching among application contexts, thus limiting the number of words that are considered for recognition at any point in time, without limiting the total number of usable words. In our approach, centralized computers perform the computationally intensive tasks of speech modeling and training offline, leaving only the task of actual recognition for the simplified devices.
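The context-switching idea can be sketched as follows: the total vocabulary may be large, but an utterance is only ever matched against the small word set active in the current application context. The contexts and word lists here are invented for illustration; the real matcher operates on acoustic models, not strings:

```python
# Sketch of context-limited recognition: the device matches an
# utterance only against words active in the current context, keeping
# per-utterance work small even though the total vocabulary is large.

CONTEXTS = {
    "main_menu": ["prices", "weather", "mail"],
    "prices":    ["rice", "wheat", "back"],
}

def recognize(utterance, context):
    """Stand-in for the on-device matcher: match within the active set."""
    active = CONTEXTS[context]
    return utterance if utterance in active else None

def active_vocabulary_size(context):
    """Words the recognizer must consider right now, not overall."""
    return len(CONTEXTS[context])
```

The design choice is the same one that makes hardware recognition feasible: the recognizer's working set stays constant (a handful of words per context) while the application's reachable vocabulary grows with the number of contexts.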
Furthermore, we believe that many useful applications in the developing world can be constructed around a limited-vocabulary voice-based interface, similar to VoiceXML. Concrete examples of voice-amenable applications include disease diagnostic devices and commodity price distribution.
Early results. As part of our fieldwork, we gathered samples of spoken numbers and simple commands from Tamil speakers in India and Berkeley. Initial results showed that we can achieve successful speech recognition training using relatively few speakers (about 30). We also completed the design for a low-power hardware-based speech recognizer chip.30 We estimate a 100 percent duty cycle power
dissipation of around 19 mW, which is orders of magnitude lower than existing solutions. Our simulations demonstrate accuracies of up to 97 percent for speaker-independent recognition in both English and Tamil, which we believe to be sufficient for UI recognition. The estimated size of 2.5 mm2 per chip in 0.18-µm technology confirms the potentially very low production cost. We are also pursuing the UI design for educational software, combining the issues of literacy, usability, learning, and motivation. Our initial fieldwork in India suggests that even though the UI must be sufficiently simple and attractive for children to use, it must also balance fun and learning. More importantly, the UI should promote the child’s personal development. For example, the results of our in-class experiment with a note-taking application suggest that UIs could be designed as scaffolds that develop more effective note-taking habits among students, as well as direct the user’s attention to critical topics during a lecture.31 Learner-centered design seems especially critical when there is a lack of qualified teachers.32
Power systems
Although there continues to be much effort in rural electrification, the lack of power remains a fundamental obstacle to the adoption of ICT. Despite some success with providing solar power in India and Kenya, grid coverage is rare in rural areas. Even where electricity is available, we have found the low quality of power to be a huge practical problem for ICT. Note that lighting and cooking—the primary uses—are relatively immune to this limitation. Initial measurements with high-quality data loggers show huge variation in voltage and current, including short spikes of up to 1,000 V (for a 220 V AC system), long brownouts at 150 V, and frequent outages, both long and short. Using voltage stabilizers or UPS systems is an expensive solution that adds significantly to the real cost of ICT. We expect that changes in PC motherboards or power supplies would be a more effective strategy.
We have also found great inefficiency in the solar systems used for ICT. A typical system uses solar panels and 12-V lead-acid batteries, an inverter to bring the voltage up to 220 V AC, and then PC power supplies to bring the voltage back down to 12 V DC. The standard power supply must generate +/-12 V, +/-5 V, and +3.3 V, all at worst-case currents; this requires a fundamentally expensive power supply.
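To see why the battery-to-inverter-to-PC-supply chain is wasteful, it is enough to multiply the per-stage efficiencies. The figures below are illustrative round numbers for such components, not measurements from our deployments:

```python
# Illustrative end-to-end efficiency of two ways to power a 12 V DC
# load from a 12 V battery. Stage efficiencies are assumed values.

def chain_efficiency(*stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Battery -> inverter (12 V DC to 220 V AC) -> PC supply (back to DC)
inverter_chain = chain_efficiency(0.85, 0.70)   # ~0.60 overall

# Battery -> single DC-DC converter, skipping the AC round trip
dc_chain = chain_efficiency(0.90)
```

Under these assumptions, roughly 40 percent of stored solar energy is lost before it reaches the electronics, which is why a DC-native design looks so much more attractive than stacking an inverter on top of a standard PC power supply.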
Batteries are also a problem. Existing systems abuse the batteries, which leads to a very short life. The lead in lead-acid batteries has a huge environmental impact, and the common practice of adding acid to old batteries leads to health concerns. In the long term, we must replace lead-acid batteries with a more sustainable option: A built-in smart-charge controller seems a likely contributor to the solution.
PRAGMATIC CHALLENGES
Numerous pragmatic issues have an effect on ICT projects, including design and deployment strategies, transition planning, and the use of open source software.
Codesign, codeploy
Field deployments repeatedly suggest that ICT projects for underserved populations pose unique design challenges. The design of the Simputer (http://www.simputer.org), one of the most prominent device projects in this space, had both a needs assessment and a UI design component, yet studies have suggested that the outcome was still hindered by a mismatch between technologists and users.7 In fact, the Simputer eventually became a PDA without development goals. Given that unique regional and cultural characteristics can play a large role in determining a project’s success, effective codesign requires using local knowledge to understand the appropriateness of certain technologies over others. Similarly, there is a strong case for coupling the use of local knowledge for design with partnerships in deployment.
ICT researchers also benefit from the stable long-term agendas of NGOs compared with governments. Often the only organized bodies that reach remote populations are NGOs and local government bodies such as village councils. NGOs also tend to be easier to access and relatively transparent due to outside funding. However, we have found that motivating these groups requires building relationships and showing concrete early results.
Transition planning
Experience suggests that a subtle issue for ICT projects is the need for a careful transition plan. For example, in ITC’s e-Choupal project, the kiosk system essentially disintermediates agricultural middlemen. Such situations are not at all uncommon—e-governance operations often depend upon corrupt officials supporting systems that disenfranchise their “gray” livelihoods. To get the support of such intermediaries, the transition plan must include their interests. Thus, agricultural kiosk projects might consider either hiring former middlemen as kiosk operators or simply paying them off. The failure to make a gentle transition can indeed cripple ICT projects that otherwise seem likely to succeed.33
Similarly, issues such as land record titling, widely considered a key path to progress, present transitional challenges that could derail or even exacerbate developmental gaps. Examples from southern India and Cambodia present some negative impacts of technological innovation, when modern land-titling technologies caused destitution among the unprotected poor, who often have lived for years as squatters on government land (http://www.slate.com/Default.aspx?id=2112792). Being mindful of the informal economies that innovative technologies disrupt is essential, and the appropriate transition mechanisms must be incorporated to protect the interests of those with the least voice in the creation or deployment of these systems.
Open source software
In general, developing regions have a default preference for open source software on the premise that it is free. In practice, however, “paid” software such as Microsoft Windows is also free due to piracy, and the legitimacy of software becomes an issue only when the funding source requires it. The more persuasive argument for open source is the ability to localize and customize. Governments in several African, Asian, and Latin American nations have funded efforts for customized local versions of software. This is a critical issue for developing regions, where technological solutions often must be microlocalized for markets that are too small to attract the interest of large producers. We have also found serious problems with spyware and viruses in ICT pilots, problems that using open source software could avoid—at least in the short run.
That being said, there are important advantages to Windows. Due to its ubiquitous use, Windows knowledge is viewed as a valuable skill. In Brazil, Windows vocational training for the poor appears to be a viable business.34 More people are trained in using Windows, which can facilitate system administration. Our work at Berkeley is all open source, but our partners use both kinds of software, and we expect this will continue.
Our objective is to convince researchers that ICT can play a large role in addressing the challenges of developing regions and that there is a real need for innovative research. The needs of developing regions are both great and unique, and thus lead to a rich and diverse research agenda. Moreover, as these needs are different from those of industrialized nations, market forces will continue to meet them at best accidentally. Because the needs are great, we must do better. Providing ICT for developing regions is not easy, but it is uniquely rewarding. We encourage the ICT research community to take on the challenge. ■
References
1. P. Polak, B. Nanes, and D. Adhikari, “A Low-Cost Drip Irrigation System for Small Farmers in Developing Countries,” J. Am. Water Resources Assoc., Feb. 1997.
2. K.R. Smith et al., “One Hundred Million Improved Cook Stoves in China: How Was It Done?” World Development, vol. 21, no. 6, 1993, pp. 941-961.
3. B. Milanovic, “True World Income Distribution, 1988 and 1993: First Calculation Based on Household Surveys Alone,” The Economic J., Jan. 2002, pp. 51-92.
4. H. De Soto, The Mystery of Capital: Why Capitalism Triumphs in the West and Fails Everywhere Else, Basic Books, 2000.
5. M. Yunus, “The Grameen Bank,” Scientific American, Nov. 1999, pp. 114-119.
6. N. Cohen, “What Works: Grameen Telecom’s Village Phones,” World Resources Inst., 2001; http://www.digitaldividend.org/pdf/grameen.pdf.
7. R. Fonseca and J. Pal, “Bringing Devices to the Masses: A Comparative Study of the Brazilian Computador Popular and the Indian Simputer,” Proc. South Asia Conf.: Trends in Computing for Human Development in India, UC Berkeley, Feb. 2005.
8. E. Servat et al., “Satellite Data Transmission and Hydrological Forecasting in the Fight against Onchocerciasis in West Africa,” J. Hydrology, vol. 117, 1990, pp. 187-198.
9. World Health Organization, Success in Africa: The Onchocerciasis Control Programme in West Africa, 1974-2002, 2003.
10. D.D. Savigny et al., Fixing Health Systems: Linking Research, Development, Systems, and Partnerships, Int’l Development Research Centre, 2004.
11. Aravind Eye Hospitals, Ann. Activities Report, 2004.
12. UNESCO Institute for Statistics, Global Education Digest 2004: Comparing Education Statistics Across the World, UNESCO, 2004.
13. M.H. Akhand, “Disaster Management and Cyclone Warning System in Bangladesh,” Early Warning Systems for Natural Disaster Reduction, J. Zschau and A.N. Kuppers, eds., Springer, 2003.
14. A. Bayes, J. von Braun, and R. Akhter, “Village Pay Phones and Poverty Reduction: Insights from a Grameen Bank Initiative in Bangladesh,” Discussion Papers on Development Policy, ZEF Bonn Center for Development Research, June 1999.
15. Dutch Ministry of the Interior and Kingdom Relations, Capgemini Netherlands and TNO, “Does e-Government Pay Off?” Nov. 2004.
16. P. Banerjee and P. Chau, “An Evaluative Framework for Analyzing e-Government Convergence Capability in Developing Countries,” Electronic Government, vol. 1, no. 1, 2004, pp. 29-48.
17. S. Nedevschi et al., “A Multidisciplinary Approach to Studying Village Internet Kiosk Initiatives: The Case of Akshaya,” Proc. Policy Options and Models for Bridging Digital Divides, Global Challenges of eDevelopment; http://www.globaledevelopment.org/forthcoming.htm.
18. D.D. Richardson, R. Ramirez, and M. Haq, Grameen Telecom’s Village Phone Programme in Rural Bangladesh: A Multi-Media Case Study, TeleCommons Development Group, Mar. 2000.
19. S. Bhatnagar and A. Dewan, “Grameen Telecom: The Village Phone Program”; http://poverty2.forumone.com/files/14648_Grameen-web.pdf.
20. International Monetary Fund, Balance of Payments: Statistics Yearbook, 2003.
21. D.M. Upton and V.A. Fuller, “The ICT e-Choupal Initiative,” Harvard Business School Case N9-604016, Jan. 2004.
22. P. Bhagwat, B. Raman, and D. Singh, “Turning 802.11 Inside Out,” Proc. Hot Topics in Networks (HotNets-II), ACM Digital Library, 2003.
23. B. Raman and K. Chebrolu, “Revisiting MAC Design for an 802.11-Based Mesh Network,” Proc. Hot Topics in Networks (HotNets-III), ACM Digital Library, 2004.
24. K. Fall, “A Delay-Tolerant Network Architecture for Challenged Internets,” Proc. ACM SIGCOMM Conf., ACM Press, 2003, pp. 145-158.
25. A.S. Pentland, R. Fletcher, and A. Hasson, “DakNet: Rethinking Connectivity in Developing Nations,” Computer, Jan. 2004, pp. 78-83.
26. M. Demmer et al., Implementing Delay-Tolerant Networking, tech. report IRB-TR-04-020, Intel Research Berkeley, 2004.
27. S. Jain et al., “Using Redundancy to Cope with Failures in a Delay-Tolerant Network,” to appear in Proc. ACM SIGCOMM Conf., ACM Press, Aug. 2005.
28. S. Jain, K. Fall, and R. Patra, “Routing in a Delay-Tolerant Network,” Proc. ACM SIGCOMM Conf., ACM Press, Aug. 2004, pp. 27-34.
29. M. Kam and T. Tran, “Lessons from Deploying the Remote Transaction System with Three Microfinance Institutions in Uganda”; http://www.cs.berkeley.edu/~mattkam/publications/UNIDO2005.pdf.
30. S. Nedevschi, R. Patra, and E. Brewer, “Hardware Speech Recognition on Low-Cost and Low-Power Devices,” Proc. Design and Automation Conf., 2005.
31. M. Kam et al., “Livenotes: A System for Cooperative and Augmented Note-Taking in Lectures,” Proc. ACM Conf. Human Factors in Computing Systems (CHI 05), ACM Press, 2005, pp. 531-540.
32. C. Quintana et al., “Learner-Centered Design: Reflections and New Directions,” Human-Computer Interaction in the New Millennium, J. Carroll, ed., Addison-Wesley Professional, 2001.
33. R. Kumar, “e-Governance: Drishtee’s Soochana Kendras in Haryana, India,” Proc. South Asia Conf.: Trends in Computing for Human Development in India, 2005.
34. C. Ferraz et al., “Computing for Social Inclusion in Brazil: A Study of the CDI and Other Initiatives,” Bridging the Divide, 2004; http://bridge.berkeley.edu/pdfs/cdi_Brazil_report.pdf.
Eric Brewer is a professor of computer science at the University of California, Berkeley. He received a PhD in electrical engineering and computer science from MIT. Contact him at [email protected].

Michael Demmer is a PhD student at UC Berkeley, studying distributed systems and intermittent networking. He received a BS in computer science from Brown University. Contact him at [email protected].

Bowei Du is a PhD student at UC Berkeley studying distributed systems. He received a BS in computer science from Cornell University. Contact him at
[email protected]. Melissa Ho is a research staff member at UC Berkeley. She received an MSc in data communications, networks, and distributed systems from University College London. Contact her at
[email protected]. Matthew Kam is a PhD student in computer science at UC Berkeley working on educational technology and human-computer interaction. He received a BA in economics and a BS in electrical engineering and computer science, also from UC Berkeley. Contact him at
[email protected].
Sergiu Nedevschi is a PhD student in computer science at UC Berkeley, studying ICT for developing regions and wireless networking. He received a BS in computer science from the Technical University of Cluj-Napoca, Romania. Contact him at [email protected].

Joyojeet Pal is a PhD student in city and regional planning at UC Berkeley, where he received an MS in information management and systems. Contact him at
[email protected]. Rabin Patra is a PhD student in computer science at UC Berkeley, studying architecture and wireless networking. He received a B.Tech (bachelor of
technology) in computer science and engineering from the Indian Institute of Technology, Kharagpur. Contact him at
[email protected]. Sonesh Surana is a PhD student in computer science at UC Berkeley, studying networking and ICT for healthcare. He received a BS in computer science from Carnegie Mellon University. Contact him at
[email protected]. Kevin Fall is a senior networking researcher at Intel’s Berkeley Research Laboratory. He received a PhD in computer science and engineering from the University of California, San Diego. Contact him at
[email protected].
COMPUTING PRACTICES
Reflection and Abstraction in Learning Software Engineering's Human Aspects

Intertwining reflective and abstract modes of thinking into the education of software engineers, especially in a course that focuses on software engineering's human aspects, can increase students' awareness of the discipline's richness and complexity while enhancing their professional performance in the field.

Orit Hazzan, Technion-Israel Institute of Technology
James E. Tomayko, Carnegie Mellon University
The complexity of software development environments includes the profession's cognitive and social aspects. A course designed to increase students' awareness of these complexities introduces them to reflective mental processes and to tasks that invite them to apply abstract thinking. For the past three years, we have taught a Human Aspects of Software Engineering course at both the Technion-Israel Institute of Technology and the School of Computer Science at Carnegie Mellon University. This course aims to increase software engineering students' awareness of the richness and complexity of various human aspects of software engineering and of the problems, dilemmas, questions, and conflicts these professionals could encounter during the software development process.1
REFLECTION AND ABSTRACTION IN SOFTWARE ENGINEERING

The reflective and abstract modes of thinking play a key role in the education of software engineers and the practice of software engineering.
Reflective thinking

Generally speaking, the reflective practitioner perspective, introduced by Donald Schön,2,3 guides professionals such as architects, managers, musicians,
and others to rethink and examine their work during and after accomplishing the creative process. This concept is based on the assumption that such reflection improves the proficiency and performance within such professions. Analysis of the software engineering field and the kind of work that software engineers usually perform supports applying the reflective practitioner perspective to software engineering in general and to software engineering education in particular.4,5 The importance of reflection as a mental habit in the software engineering context derives mainly from two factors:

• the complexity involved in the development of software systems, regardless of whether we examine this complexity from an engineering, architectural, or design point of view, and
• the crucial role of communication among teammates in the successful development of a software system.

The first factor emphasizes improving understanding of our own mental processes. We can achieve this by adopting a reflective mode that teaches us about our own ways of thinking. The second factor implies that improving communication within software development teams requires
that individuals increase their awareness of their own mental processes as well as those of other team members. Adopting the reflective practitioner methodology as a cognitive tool could be helpful to programmers as they develop software systems. Likewise, adopting a reflective approach to software engineering education could help students improve their understanding of software creation's essence. Given the richness of software development processes, this approach could produce a vast object collection that provides subjects for further reflection. We could start with the actual creations—the software systems—and reflect on how practitioners develop and use algorithms, then move on, for example, to skill-related topics such as development approaches, topics related to human-computer interaction, and ways of thinking. To illustrate the appropriateness of applying the reflective practitioner perspective to software engineering processes, we present Schön's description of the reflection subjects in an architectural context:

When practitioners reflect on their practice, the possible objects of this reflection can be as varied as the kinds of phenomena they address and the systems of knowing-in-practice they bring to bear. The practitioners could reflect on the tacit norms and appreciations that underlie a judgment or on the strategies and theories implicit in a behavior pattern. Further, the practitioners could reflect on the feeling for a situation that has led them to adopt a particular course of action, the way in which they have framed the problem, or the role they have constructed within a larger institutional context.2
The Studio teaching approach—implemented in the School of Computer Science at Carnegie Mellon University6,7—inspires reflective processes. The course we describe here integrates a framework for reflective processes into the education of software engineers in general.
Abstract thinking

The disciplines of computer science and software engineering have developed heuristics to overcome the widely acknowledged cognitive complexity of software development. These heuristics have been subjected to explicit discussion in the computer science and software engineering literature, and they have become an integral part of the two disciplines. One of these heuristics, abstraction, focuses on solving complex problems. In addition to general
abstraction processes, software developers should think in terms of different abstraction levels and move between them during the process of software development. For example, when trying to understand customers' requirements during the first stage of development, developers must take a global view of the application, which requires a high level of abstraction; when coding a specific class, they should adopt a local perspective, which uses a lower abstraction level. Obviously, additional levels exist between these two levels. However, abstracting is a nontrivial cognitive process and teaching abstraction presents a complex challenge.8 Abstraction's inherent role in software engineering processes underscores its role in increasing students' awareness of the benefits they can gain by using this approach and adopting it as a mode of thinking. It also explains why we chose abstraction as a central motive for the Human Aspects of Software Engineering course.

Abstraction can be expressed in different ways, all of which guide us to overcome cognitive complexity by ignoring irrelevant details at specific stages of problem-solving situations. Based on previous work in this area, we have identified three ways in which abstraction can be expressed.1 First, we can observe what a group of objects has in common and capture this essence in one abstract concept. In such cases, abstraction leads us to observe what is common to a set of objects and to ignore irrelevant differences among them. We can capture these characteristics with a mathematical concept, a class in object-oriented development processes, and so on. In this sense, abstraction maps from many to one. For example, the hierarchy of mammals reflects abstraction: its higher levels ignore differences among species, while its lower levels distinguish their different characteristics.
Design patterns provide an example of the expression of this abstraction in the context of software engineering.9 Second, we can select the appropriate abstraction level of the language to describe a specific solution. This language should not be based on the tools provided by the programming language used for the actual software development. In this case, abstraction helps developers think about the problem in appropriate conceptual terms, without being guided to think in a particular programming language’s terms. If developers do not use such abstractions, computer languages would force them to delve into irrelevant details too early in development, thus taking control of the programming process. In such cases, abstraction
bridges the gap between natural language and the programming language. The evolution of programming languages reflects this idea. While the first programming languages differed significantly from human languages, software developers can use today’s programming languages to express ideas in ways that are more similar to human language. Third, we can apply abstraction to describe objects by their characteristics rather than by how they are constructed or work. Accordingly, C.A.R. Hoare10 explained that an abstract command specifies the properties of the computer’s desired behavior without prescribing in detail how that behavior will be achieved. This idea can be expressed by, for example, writing one set of instructions for manipulating different object types, with lower abstraction levels specifying the different ways these instructions work on the different objects. This approach is applied by setting abstraction barriers.11 Based on the benefits abstraction can offer software developers, we focus careful attention on students’ use of abstract thinking. Specifically, to help students become familiar with the idea of abstraction and its relevance and contribution to software development processes, in each lesson we pose at least one task that requires abstract thinking. After they have solved the task, students engage in a reflection process to help increase their awareness of the abstract mode of thinking in general and in software development situations specifically. This gives them experience in observing how working with different abstraction levels can improve and enhance development.
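The first and third senses of abstraction (many objects captured by one concept, and behavior specified by its properties rather than by how it is achieved) can be sketched in a short, hypothetical example; the class and function names here are ours, not from the course:

```python
from abc import ABC, abstractmethod
import math

class Shape(ABC):
    """Abstraction barrier: specifies *what* a shape provides,
    not *how* each concrete shape computes it."""
    @abstractmethod
    def area(self) -> float: ...

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side ** 2

def total_area(shapes):
    # One set of instructions manipulates different object types;
    # the lower abstraction level (each subclass) supplies the details.
    return sum(s.area() for s in shapes)

print(total_area([Circle(1.0), Square(2.0)]))  # pi + 4
```

The abstract class maps many concrete shapes to one concept, and `total_area` never needs to know how any particular area is computed.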
COURSE DESCRIPTION

The Human Aspects of Software Engineering course we teach offers specific reflection-oriented and abstraction-oriented tasks that the students are asked to perform. This approach is described according to the following lesson topics.
Software engineering’s nature This first lesson highlights that software engineering differs from many other professions because, despite its technical nature, human issues remain central to this discipline. This assertion sets the stage for the message delivered throughout our entire course. Lesson objectives focus on increasing students’ awareness that most software development successes and failures stem from people issues, not technology issues. This lesson also explores students’ analysis of the effects of human interaction on software development processes.
To achieve these objectives, we present several software engineering scenarios. We ask students to refer to human issues these scenarios raise. We conduct this activity at the course's beginning, when students lack familiarity with theories introduced later in the course. By doing so, we seek to increase students' awareness, from day one, of the significant influence human factors have on software development processes. We refer to the topics students suggest in this lesson throughout the remainder of the course. This process provides students with the opportunity to gradually increase their awareness of human-related issues throughout the course and afterward. We present the scenarios using low levels of abstraction to bring to the surface abstract topics, such as communication's role in teamwork. Using a low level of abstraction is appropriate when introducing new concepts such as human aspects of software engineering. In later lessons, we address some of the details described in this lesson at higher abstraction levels.
Software engineering methods

The second lesson focuses on methods that can be applied to software development processes. Naturally, we focus on the human aspects of each method, emphasizing issues such as interactions between team members and with customers. We limit the discussion to the human aspects of three software development methods: the spiral model, the unified process, and extreme programming. Lesson objectives focus on helping students understand the shape of the models involved in these software development methods and increasing their awareness of the human aspects of each method. To achieve these objectives, we ask students to reflect on activities they conduct. For example, they might reflect on the process they used to develop their last software project and identify the method that dominated that process.

We use the paradigm concept to express abstraction. Specifically, for each of the three methods, we discuss the implementation of the four basic software engineering paradigm activities—specifying, designing, coding, and testing. From an abstract viewpoint, the software engineering paradigm captures the shared essence of each software development method's core activities while ignoring the details of their implementation by each method. To elevate students' awareness of the different abstraction levels involved, we ask them to explain the specific implementation of the software engineering paradigm in each of the three development methods.
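The paradigm's many-to-one abstraction can be sketched as a table in code. The concrete practices listed below are our own illustrative characterizations, not taken from the course or from the methods' canonical definitions:

```python
# The four paradigm activities form an abstraction shared by every
# method; each method supplies its own concrete practices (the labels
# below are hypothetical characterizations for illustration only).
PARADIGM = ("specifying", "designing", "coding", "testing")

methods = {
    "extreme programming": {
        "specifying": "user stories",
        "designing": "simple design and refactoring",
        "coding": "pair programming",
        "testing": "test-driven development",
    },
    "unified process": {
        "specifying": "use cases",
        "designing": "UML models",
        "coding": "component implementation",
        "testing": "iterative test workflows",
    },
    "spiral model": {
        "specifying": "objectives set per cycle",
        "designing": "risk-driven prototypes",
        "coding": "incremental builds",
        "testing": "validation at each spiral",
    },
}

# From the abstract viewpoint, all methods share the same core activities,
# even though every cell of the table differs.
assert all(set(m) == set(PARADIGM) for m in methods.values())
```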
Working in software teams
The third lesson deals with teamwork, a central characteristic of software development. This lesson emphasizes that teamwork is essential when developing any sizable software system. It delivers the message that having an awareness of various aspects of teamwork and different approaches to dealing with dilemmas in the software development process can help developers cope with difficult situations. Lesson objectives focus on differentiating between the kinds of teams, considering factors such as structure, benefits, disadvantages, suitable projects, and team member roles. The lesson also covers increasing awareness of teamwork-related problems and suggests possible solutions. To achieve these objectives, we ask students to reflect on their experience with teamwork. For example, we may ask that they work in teams on tasks that raise dilemmas, analyze the outcome of these tasks, and reflect on the process the team went through. Based on this reflection, we ask them to suggest solutions for similar situations in the future. This lesson highlights abstraction as it relates to software team structures. Specifically, we discuss the influence of hierarchies on teamwork in the context of software development. With regard to hierarchical influences, we mention that developers in low levels of the hierarchy may have only a narrow understanding of the development environment in general and the developed software in particular. If programmers are familiar only with the details, they cannot conceive of the developed application from a higher abstraction level. This narrow perspective can limit their understanding of their assigned development activities. Specifically, they may not understand how the details connect to each other and how they comprise the big picture. 
Based on this observation, we ask students to suggest examples that illustrate connections between a developer’s level of familiarity with an application—that is, having or not having multiple perspectives on different levels of abstraction—and the development of a specific task for which a developer is responsible.
Software engineering code of ethics

The fourth lesson focuses on the notion of ethics in general and reviews the Code of Ethics of Software Engineering (www.computer.org/tab/seprof/code.htm) in particular. At this point, we focus on how to improve students' ability to predict situations in software development processes that could cause harm from an ethical perspective and discuss how such situations can be avoided. Reflective processes are integrated into this lesson as learners are requested to reflect on their personal experience and to tie it to the Code.

Abstraction can be expressed in this lesson by presenting situations that characterize both the details and the general ethical issues raised. This exposition method helps students identify specific situations as instances of more abstract cases for which the Code outlines accepted norms in the software engineering community. We suggest that such recognition can guide students to act according to the Code. To illustrate this idea, we ask students to tell a story related to software engineering that raises an ethical dilemma and to suggest possible solutions based on the Code. Then, we ask them to tell another story for which the solution can be derived from the solutions offered for the first story. Based on what the two stories have in common, we then ask the students to explain why these similar features are sufficient for deriving the same solutions.

Abstraction can also be expressed by how the Code is originally presented. In its preamble, the Code explicitly states:

… the short version of the code summarizes aspirations at a high level of abstraction. The clauses that are included in the full version give examples and details of how these aspirations change the way we act as software engineering professionals. Without the aspirations, the details can become legalistic and tedious; without the details, the aspirations can become high-sounding but empty; together, the aspirations and the details form a cohesive code.
Software as a product

The fifth lesson examines the human aspect of software engineering from the customer's perspective. It focuses on how developers and users can collaborate to define and develop software requirements that fulfill customers' needs. Lesson objectives focus on viewing the customer as part of the development environment and on increasing awareness that requirements change forms a natural and necessary part of the software development process. To achieve these objectives, we have students assume the role of customers so that they can reflect on the development process from this perspective.
For example, students could play customers who need a software system for Web-based surveys. In this role, they must define the customer requirements for that system from a non-software-engineering perspective. After they list the requirements, we ask the students to analyze the requirements by addressing topics such as the kinds of requirements listed. Following this analysis, they reflect on the process for defining the requirements and explain why requirements change is so predominant in software engineering and why the percentage of software tools that meet customers’ needs remains relatively low. With respect to the abstract perspective, we assert that customers’ requirements are themselves an expression of abstraction because, instead of discussing the details of implementing these requirements, developers must think in terms of a high abstraction level and describe only the requirements’ essence.
International perspective on software engineering

The sixth lesson extends the scope of our examination beyond the team and organization to address the influence of specific events on the global high-tech economy and the nature of software engineering worldwide. In addition, we discuss cultural topics such as gender and minorities in the high-tech industry. Lesson objectives focus on increasing students' awareness of the influence culture has on software engineering processes and on becoming familiar with issues related to gender and minorities. To achieve these objectives, we ask students to analyze and reflect on their personal experience in software development from a cultural perspective. Abstraction is expressed by addressing cultural issues. Indeed, the concept of culture is itself an abstraction because it captures in one term the similar behavior of many people, while ignoring differences among individuals.
Program comprehension

The seventh lesson focuses on program comprehension—how software developers understand computer programs. The lesson also deals with related topics such as code review. Lesson objectives focus on acknowledging the importance of programming style and its influence on comprehending programs and on observing connections between programming style and the daily life of software engineers. These objectives can be achieved by, for example, asking students to develop two computer programs that perform the same task: One program
should be comprehended easily, the other only with considerable effort. We then ask students to reflect on the development process and analyze what thoughts guided them in the development of each program. Abstraction plays a central role in this lesson because some program comprehension theories use abstraction as their organizing idea.12 The relevance of abstraction in this context is clear. The process of program comprehension requires moving between levels of abstraction, which in turn requires a high level of awareness. To increase students’ awareness of abstraction’s role in this process, we ask them to deal with questions such as “In what sense does awareness of different abstraction levels improve your program comprehension processes?”
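A hypothetical instance of such a paired task (both programs and all names are ours, not the course's): the two functions below perform the same task, summing a list's even numbers, at very different comprehension costs.

```python
def f(x):
    # Hard to comprehend: cryptic names, arithmetic trickery in place
    # of a stated intent (the multiplier is 1 for evens, 0 for odds).
    r, i = 0, 0
    while i < len(x):
        r += x[i] * (1 - x[i] % 2)
        i += 1
    return r

def sum_of_evens(numbers):
    """Easy to comprehend: the name and structure state the intent."""
    return sum(n for n in numbers if n % 2 == 0)

data = [1, 2, 3, 4, 5, 6]
assert f(data) == sum_of_evens(data) == 12
```

Reflecting on which cues made the second version readable, and which abstraction levels a reader must move between to decode the first, is exactly the kind of discussion the lesson aims for.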
We assert that customers’ requirements are an expression of abstraction because developers must think in terms of a high abstraction level and describe only the requirements’ essence.
Software engineering learning processes

The eighth lesson, which addresses the need for ongoing learning in software engineering, focuses on reflective practice and its relevance to software engineering: We ask students to reflect on their experiences throughout the entire course. Lesson objectives focus on appreciating the importance of learning processes in software engineering generally and of using a reflective mode of thinking particularly. Students also identify situations in which they can learn by applying reflective thinking and communal learning processes. To achieve these objectives, we ask students to analyze specific situations in software development that might benefit from reflective thinking. We also ask them to describe situations in which developers can learn from reflection on their own previous experience and, based on such reflective processes, improve their performance.

The learning organization, specifically Peter Senge's framework13 for learning organizations, receives special attention in this lesson. According to Senge, one dimension of a learning organization, systems thinking, helps us see more effectively how to change systems and act more in tune with the larger processes of the natural and economic world around us. Systems thinking thus provides a conceptual framework based on patterns that explain different events as instances of one phenomenon. Thus, systems thinking centers on abstraction because it guides us to ignore details and examine what different events have in common. To illustrate this idea, we ask students to suggest examples from the software engineering world that show the idea of systems thinking and to explain
the connections between those examples and abstraction.

The Six Stages of Case Study Construction

The final two lessons of our course guide students through a process in which they construct and analyze case studies. The following six-stage process provides a toolset that students can use to construct and analyze a variety of case studies:

• Stage 1: Select a topic. Think about a topic to discuss that you find interesting and relevant.
• Stage 2: Analyze the topic. Determine whether the topic merits being the center of a case study. Ask the following questions: What kinds of activities do developers of the selected topic participate in? What human aspects of software engineering does the topic address? Does the topic connect to the team as a whole or to individual team members? If the answers to these questions indicate that the topic is rich enough and can be connected to different issues in a software development environment, it might be an appropriate central topic for a case study.
• Stage 3: Imagine possible situations. Identify at least two software engineering situations in which the topic might be relevant. This will help determine whether there are specific software engineering situations in which the topic you want to pursue can be expressed tangibly.
• Stage 4: Write the case study. Be sure to include the main issues you want to address and make the case study as vivid as possible.
• Stage 5: Check the scope. After writing and editing the case study, determine if other, related topics can be added to the case. Make sure not to alter the focus: Does the case study properly reflect the main message you want to convey? Are the connections between the case study's different topics clear?
• Stage 6: Develop questions about the case study. These questions should be stimulating and appropriate. Also answer the questions to check whether they are interesting enough.
Software engineering perspectives

The ninth lesson explores different viewpoints of the software development process. Given the software engineering discipline's relative youth, developers naturally bring diverse approaches to the field. Lesson objectives focus on increasing students' awareness that they can view software engineering from different perspectives, each of which highlights a different aspect of the discipline, and that they can observe what elements of each perspective best fit their values and perception of the discipline. To achieve these objectives, we ask students to analyze different situations in software engineering from the various perspectives presented in the lesson. In addition, we ask them to reflect on how
each perspective adds to their understanding of the discipline’s essence. We demonstrate abstraction in this lesson by analyzing properties of the software engineering profession rather than describing its details and procedures. In this spirit, we ask students to suggest professions that could be analogous to the discipline of software engineering and to explain how these professions both resemble and differ from software engineering.
Software development principles

The tenth lesson highlights heuristics such as abstraction and successive refinements. Specifically, the lesson addresses connections between such heuristics and software engineering's cognitive aspect. Although the entire course highlights abstract thinking, it plays a central role in this lesson, in which the objectives focus on increasing familiarity with the idea of abstraction and its relevance to software development processes. We also strive to increase students' awareness of situations in which thinking at different levels of abstraction could enhance software development processes. To achieve these aims, we present several software development activities for which abstract thinking is critical. We ask the students to reflect on their efforts in participating in these activities and to analyze how thinking in different levels of abstraction supported their work.
Software and software engineering’s human aspects The eleventh lesson shows that software characteristics cannot be isolated from the people who develop the software and that each characteristic actually demands something from the developers. Lesson objectives focus on making students aware that producing qualified software systems depends largely on developers’ conceptions of what software is. We also urge students to suggest development practices that ensure the software they produce meets its specifications, both from the customer’s perspective and as a qualified product. To achieve these aims, we ask students to explore what characterized the software systems they developed in the past and to reflect on how the process they used led to the production of software with these characteristics. We express the abstract dimension in this lesson by analyzing software not through its implementation or behavior, but rather through its properties.
Software engineering history
The twelfth lesson presents the history of software engineering, emphasizing that the human aspects play a significant role in this historical process. This lesson shows that the history of software engineering cannot be told without examining how people reacted to software development. Lesson objectives focus on increasing familiarity with the main stages and events in software engineering's history and becoming more familiar with the nature of that history. While it does not include particular reflection processes, this lesson does encourage students to associate its content with what has been taught in previous lessons. It also describes how abstraction became part of the software engineering paradigm.
Case study analysis
The last two lessons aim to increase students' awareness, sensitivity, and analysis skills when they participate in software development environments. We based the case study analyses for these two lessons on theories learned in previous lessons. As the "The Six Stages of Case Study Construction" sidebar describes, these lessons guide students through a process in which they construct and analyze a case study. When appropriate, we also ask the students to connect the case study under discussion to their own software development experience.
Introducing the reflective and abstract thinking emphasized in the course we have described could be beneficial in other software engineering courses. In particular, we suggest introducing reflective and abstract thinking processes into courses that
• focus on improving students' analytical skills and problem-solving abilities;
• emphasize software engineering's multifaceted nature;
• base the lessons on students' interaction, encouraging them to learn from their peers, and offer experience with teamwork and information sharing; and
• guide students to look for different viewpoints, controversial issues, dilemmas, and conflicts they probably will face in the future. ■
Orit Hazzan is a senior lecturer in the Department of Education in Technology and Science at the Technion-Israel Institute of Technology. Her research focuses on computer science education in general and teaching human aspects of software engineering in particular. Specifically, she examines cognitive and social processes in the teaching and learning of software development methods. Hazzan received a PhD in mathematics education from the Technion-Israel Institute of Technology. Contact her at [email protected].

James E. Tomayko is a teaching professor in the School of Computer Science at Carnegie Mellon University. His research interests include real-time embedded systems and the history of computing. Tomayko received a PhD in the history of technology from Carnegie Mellon University. He is a member of the IEEE Computer Society. Contact him at [email protected].
COVER FEATURE
Project Numina: Enhancing Student Learning with Handheld Computers

Despite recent technological advances, the promise of the virtual classroom with anywhere, anytime access and rich data sharing remains largely unfulfilled. To address this need, Project Numina researchers are developing a mobile learning environment that fosters collaboration among students and between students and instructors.
Barbara P. Heath East Main Educational Consulting
Russell L. Herman Gabriel G. Lugo James H. Reeves Ronald J. Vetter Charles R. Ward University of North Carolina Wilmington
The widespread emergence of wireless networks and the rapid adoption of handheld computing devices have made mobile computing a reality. Hotspots are now commonly found at coffee shops, bookstores, airports, hotels, and other public spaces. In line with this trend, a growing number of college campuses are also becoming "unwired." This new infrastructure is enabling the development of a whole new generation of educational applications and associated pedagogical approaches to teaching and learning.

A convergence of technologies is giving small computing platforms, such as the pocket PC, the ability to support telecommunications, audio and video applications, mathematical computations, word processing, electronic spreadsheets, and standard PDA functions. Wrapped into a single device, the handheld can replace all of the traditional electronic hardware students commonly carry in their backpacks, such as a cell phone, MP3 player, and calculator. Laptops and tablet PCs are also more popular than ever on campus.1,2

College students across the US are increasingly making extensive use of mobile computing devices that they carry with them at all times. As this technology becomes more accessible, colleges and universities will seek new ways to integrate it in the classroom.3 Unfortunately, few high-quality educational applications are currently available for handhelds, especially in mathematics and science.

A handful of US institutions, including Stanford University, Des Moines Area Community College, and California State University-Monterey Bay, now require such devices in some courses, while hundreds of others are testing their feasibility in pilot programs.4,5 More than 77 percent of US colleges and universities report some use of wireless networks, with many providing seamless network access to their physical boundaries.6

In 1999, the University of North Carolina Wilmington began rolling out its campuswide wireless network infrastructure. At the same time, Project Numina (http://aa.uncw.edu/numina) was formed to test the use of mobile devices in instruction. Over the years, an interdisciplinary team of UNCW researchers in chemistry, computer science, and mathematics has experimented with a wide range of handheld devices and software applications for use primarily in undergraduate classrooms. Our extensive experience with both commercially available and homegrown software suggests that using handhelds engages students more fully in learning and may improve understanding of mathematical and scientific concepts. On the other hand, we have also identified many shortcomings of handheld technology.
Published by the IEEE Computer Society
0018-9162/05/$20.00 © 2005 IEEE
Table 1. Handheld devices used in Project Numina.

Year purchased | Device | Operating system | Processor speed | RAM/ROM | Purchase price
1999 | HP Jornada 690 | Windows CE Handheld PC Professional Edition v3.01 | 133 MHz | 32/32 Mbytes | $749
2000 | HP Jornada 720 | Windows Handheld PC 2000 | 206 MHz | 32/32 Mbytes | $749
2002 | HP Jornada 568 | Windows Pocket PC 2002 | 206 MHz | 64/32 Mbytes | $549
2003 | Dell Axim X5 | Windows Pocket PC 2002 | 400 MHz | 64/32 Mbytes | $450
2004 | HP iPAQ 2215 | Windows Mobile 2003 | 400 MHz | 64/32 Mbytes | $499
2005 | Dell Axim X50v | Windows Mobile 2003 Second Edition | 624 MHz | 64/128 Mbytes | $425
To address these deficiencies, we are developing a mobile learning environment designed to foster collaboration among students, and between students and faculty, in a virtual learning community.
PROJECT BACKGROUND

Although UNCW has been widely using instructional technology for nearly 15 years, it has had little impact on some important areas. For example, the student role in class remains generally passive, tests do not contain multimedia components, and computers are not readily available for calculations and modeling in large classrooms. We postulated that handhelds' mobility and touch-screen interactivity made them ideal for classroom and laboratory use. In addition, some current-generation handhelds are powerful enough to run applications, including molecular modeling and numerical analysis, that only a few years ago required a supercomputer.

Project Numina focuses on creating, testing, and evaluating handheld computing applications for college-level science and mathematics students and instructors. Table 1 lists the devices used throughout the project's history in both large classroom (125-seat auditorium) and small laboratory settings.

With funding from Pearson Education and UNCW, the Project Numina research team initially acquired 100 Hewlett-Packard Jornada 690/720 series handheld computers and installed IEEE 802.11b wireless networks in the chemistry, mathematics, and computer science buildings. In 1999, such networks were still in their infancy and costs were high—for example, a single Cisco Aironet 340 access point cost $980. Using these devices, the team developed various novel applications, including preliminary versions of a Web-based interactive student response system, graphing software for use with Pocket Excel (GraphData), and instructional materials built around the use of Pocket HyperChem, a molecular modeling program developed by Hypercube for handhelds. Other student applications included Microsoft Pocket Word, Pocket Excel, and Internet Explorer; an introductory chemistry eBook; and a few DOS-based statistical applications.7
In early 2001, the pocket PC began to supplant the handheld PC in the wake of the Palm Pilot’s success. Although happy with the somewhat larger HPC, we soon realized that the market shift to the PPC was too important to ignore. In April of that year, we obtained an internal seed grant to purchase 24 PPCs and began studying how to employ these new devices for instruction. Figure 1 shows a PPC with data acquisition equipment and keyboard for use in science labs at UNCW.
HANDHELD APPLICATIONS

Although the hardware has changed during Project Numina, the classroom implementation and applications have remained consistent except for software upgrades and lessons learned. In a typical scenario, students arrive at a lecture or lab and take a machine from a charging cart. They usually receive their own device but at times may share one while working in groups. After completing their lesson, the students return the machines to the charging cart before leaving the classroom or lab.
Student response system
One of Project Numina's most popular applications is the student response system (SRS).7

Figure 1. Project Numina hardware. UNCW students use a pocket PC, such as this Dell Axim X5, with temperature probe and CompactFlash data logger, in science labs.
Figure 2. Project Numina student response system: (a) classroom view and (b) student view. Students point to a graph region on their touch screen, and the system displays the graph with all of the students' points on it.
Figure 3. Sample mathematical and scientific applications for PPCs: (a) Pocket HyperChem; (b) RDcalc; (c) Pocket Oscillator; and (d) HandDee Spectrum Analyzer.
Students use handheld computers to respond to the instructor's questions, and the SRS stores these responses in a remote database and collectively displays them at the front of the classroom. SRS is a server-side Web application implemented as an active server page. It is completely Web-based and therefore requires no special software on the client side other than a Web browser. The instructor poses a question in a multiple-choice, true-false, or yes-no format and directs students to a Web site that generates a Web form on their computer screens through which they submit their responses. Multiple question-and-answer scenarios are possible. A back-end database stores only responses to questions, not information about the student, so responses are anonymous.

In contrast to the typical 2-3 percent response rate in a traditional classroom setting, nearly all students using SRS answer the instructor's questions. This suggests that students are more comfortable responding to a question when they see others doing the same and when their responses are anonymous. Another benefit of SRS is that instructors can see immediately how well students comprehend a specific topic they have presented. This real-time feedback enables instructors to make just-in-time decisions on instructional content to better meet learners' needs.

Unlike today's popular infrared and radio-based "clicker" technologies,8 SRS can exploit the PPC's ability to run Macromedia Flash applications. This allows SRS to provide more interactive and interesting response types. For example, Figure 2 shows a Macromedia Flash application in which students presented with a graph point to a region on their touch screen, and the system displays the graph with all of the students' points on it.
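The SRS pattern just described (anonymous, per-question responses stored server-side and tallied for classroom display) can be sketched minimally. The following Python sketch is illustrative only, not the actual active-server-page implementation; all names are hypothetical:

```python
from collections import Counter

class ResponseStore:
    """Stores only the answer chosen, never the student's identity,
    mirroring how the SRS back-end database keeps responses anonymous."""

    def __init__(self):
        self._tallies = {}  # question_id -> Counter of choices

    def submit(self, question_id, choice):
        # Record one anonymous response to a question.
        self._tallies.setdefault(question_id, Counter())[choice] += 1

    def display(self, question_id):
        # Aggregate view the instructor projects at the front of the room.
        return dict(self._tallies.get(question_id, Counter()))

store = ResponseStore()
for answer in ["B", "B", "A", "C", "B"]:
    store.submit("q1", answer)

print(store.display("q1"))  # {'B': 3, 'A': 1, 'C': 1}
```

Because only the tallies are kept, the instructor sees the distribution of answers immediately while individual students remain unidentifiable, which is the property the article credits for the near-universal response rate.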
Mathematical and scientific applications
In 2002, Project Numina researchers began investigating the practicality of using handhelds for mathematics and scientific instruction. At the time, few such applications existed for these devices, but several have appeared in recent years. The most useful are designed for Windows-based PPCs, although a few are available for the Palm OS and Linux-based PDAs such as the Sharp Zaurus. Figure 3 shows screenshots from four mathematical and scientific applications used at UNCW: Pocket HyperChem (www.hyper.com/products/PocketPC), RDcalc (http://ravend.com), Pocket Oscillator (http://bofinit.com), and HandDee Spectrum Analyzer (www.phonature.com:8092/home/products_mpApp_HandDeeSA.htm).

To illustrate the growing range of PPC software available today, Table 2 provides a selection of mathematical applications, which range from calculators to spreadsheets to graphing utilities (a more comprehensive listing is available at http://people.uncw.edu/hermanr/TechFiles/PPCMathSite.htm). It is worth noting, however, that many of the more esoteric applications ported to PPCs have installation and programming requirements beyond the average student's expertise.

Table 2. Pocket PC mathematics applications.

Category | Applications
Calculators | HP 15C, Math Tablet, MRI Graphing Calculator, PDAcalc (classic and matrix), Pocket Atlantis, RDcalc, TI59ce
Computer algebra systems | Euclid, Formulae 1, Giac/Xcas—qdCAS, JACAL, Math Xpander, Maxima
Data acquisition and analysis | Data Harvest
Editors | LaTeX
Graphing | Autograph, Gnuplot, GraphData, PDAGraphiX, SpaceTime, Vinny Graph
Miscellaneous | Mandelbrot Explorer, Lisa, logic applications, PocketFract, RPS/Fract, Rubik's Cube applications, SLAE Solver
Multimedia | HandDee Spectrum Analyzer, Pocket Oscillator, Pocket RTA, Pocket TV Browser, PocketDivX, Windows Media Player CE
Programming | Python with mathematics packages, Common Lisp, Embedded Visual Basic, GnuPG, Macromedia Flash, Math.NET, NSBasic, Pari/GP, Pocket C, Pocket Scheme, PocketGCC, SCM Scheme
Spreadsheets | Pocket Excel, PTab

Project Numina members have also experimented with using handhelds as data-gathering devices in mathematics and science classes. For example, one study required students to collect spring and beam oscillation data on an HP Jornada 720 using a Data Harvest interface with distance probes and then analyze the data to solve a governing differential equation.9,10

STUDY RESULTS

Project Numina has conducted numerous studies related to the use of handheld computers in the classroom. One in-house study of the SRS found that, in addition to consistently achieving a near 100 percent participation rate during question sessions, the system increases classroom discussion and reduces off-task behavior. In addition, equipment distribution and setup costs little instructional time.

Another in-house study examined whether Pocket HyperChem's interactive capabilities contributed to student learning. Students use the software in chemistry classes to build 3D models of molecules, optimize their geometries, and measure bond lengths, bond angles, and other physical properties. We found that students who could rotate molecules on a PPC during classroom activities scored significantly higher on related quiz questions than those who viewed the molecules online without rotation capability. We also used Pocket HyperChem to design multimedia-based molecular modeling questions that required students to perform computations in near-real time on chemistry exams.

In 2001, we conducted an in-depth study of the effectiveness of handheld computers and other high-tech tools in a general chemistry laboratory.11 Students used Pocket Word to answer pre- and postlab questions, submitting their work via a File Transfer Protocol (Scotty FTP) application as well as an early version of the SRS to answer questions the instructor posed. They also had access to other PPC applications such as Pocket Excel, GraphData, Pocket Internet Explorer, a scientific calculator, Pocket HyperChem, an e-textbook, and an interactive periodic table.

The study revealed that PPC hardware limitations—including minimal I/O options, small screen size, and short battery life—as well as software restrictions made such an intense application of handheld technologies in the classroom cumbersome. For example, students could not reliably print to a local network printer, had trouble moving between multiple open applications, could share documents with lab partners only via e-mail, and could not copy and paste Pocket Excel tables into Pocket Word. The data acquisition software also limited what students could collect and share. Electronic submission of work also posed numerous challenges for instructors, including how to store, access, grade, and return files to the students. Moreover, FTP work submission was not easily
Learning Communities

Learning communities gained attention in the US after the publication of Involvement in Learning: Realizing the Potential of Higher Education, by the National Institute of Education's Study Group on the Conditions of Excellence in Higher Education, in 1984. The report's authors argued that active engagement in the learning process enhances learning and leads to two fundamental principles:

• The amount of student learning and personal development associated with any educational program is directly proportional to the quality and quantity of student involvement.
• The effectiveness of any educational policy or practice is directly related to the capacity of that policy or practice to increase student involvement in learning.

Unfortunately, traditional learning communities rely solely on direct physical interactions among students, as well as between students and faculty. This requires students to live together in designated residence halls, and they must be available to attend group seminars, study sessions, and meetings at specific times and locations. By supporting a virtual learning community, Project Numina's mobile learning environment aims to address these time and space limitations.
scalable to multiple instructors, teaching assistants, and class sections.
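The oscillation labs described earlier boil down to sampling displacement over time and extracting parameters such as the period before matching the data to a governing differential equation. The Python sketch below stands in for that workflow with synthetic probe data; the amplitude, damping, and frequency values are assumed for illustration, not taken from the study:

```python
import math

# Synthetic stand-in for distance-probe readings of a damped spring:
# x(t) = A * exp(-gamma * t) * cos(omega * t). All values assumed
# for illustration (a 1.1 Hz oscillation), not taken from the study.
A, gamma, freq = 0.05, 0.3, 1.1
omega = 2 * math.pi * freq

def displacement(t):
    return A * math.exp(-gamma * t) * math.cos(omega * t)

# Sample like a data logger: 100 readings per second for 3 seconds.
samples = [(i / 100, displacement(i / 100)) for i in range(300)]

# Estimate the period from successive downward zero crossings, a first
# step toward fitting the data against the governing equation.
crossings = [t1 for (t1, x1), (_, x2) in zip(samples, samples[1:])
             if x1 > 0 > x2]
periods = [b - a for a, b in zip(crossings, crossings[1:])]
est_period = sum(periods) / len(periods)
print(round(est_period, 2))  # 0.91, i.e. roughly 1/1.1 s
```

In the classroom version, the samples would come from the Data Harvest interface rather than a formula, and the recovered period and decay rate would feed the differential-equation analysis the students carry out.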
MOBILE LEARNING ENVIRONMENT

These study results prompted Project Numina team members to explore ways to more efficiently implement handheld technologies in classrooms and laboratories and ultimately make anytime, anywhere mobile computing a reality on university and college campuses.
Integrated computing services
Supporting truly collaborative learning requires a software infrastructure that seamlessly integrates wired and wireless computing services. Currently, computing services can be broadly categorized as either disaggregated or aggregated. Disaggregated services include individual protocols, portals, or applications such as FTP, e-mail, the Web, chat, instant messaging, Telnet, and Excel and Word. Above this layer of services are aggregated services—multiple-service protocols that typically require a common login, such as Campus Pipeline (www.campuspipeline.com); e-learning systems such as WebCT (www.webct.com), Blackboard (www.blackboard.com), and IBM Lotus Learning Space (www.lotus.com/lotus/offering3.nsf/wdocs/learningspacehome); and real-time collaboration applications such as Microsoft NetMeeting (www.microsoft.com/windows/netmeeting), FirstClass (www.softarc.com), and myStanford (http://my.stanford.edu).

We propose to add atop the aggregated services layer a new integrated services layer to support
multiway communications, teamwork, rich data sharing, and personalized application-level access controls. The lack of such a layer limits the integration of current educational applications. Our preliminary studies demonstrate that the addition of integrated services has great potential.
Mobile learning environment objectives
Drawing on the concept of integrated services, Project Numina researchers are developing a Web-based mobile learning environment that will for the first time enable students and faculty to collaborate at any time or in any location, using the most effective educational tools available. In this way, the MLE will support a 24/7 virtual learning community that is flexible, easy to use, and conducive to all types of communication including personal interaction, presentations, group tutoring, seminars, workshops, study groups, and lectures. The "Learning Communities" sidebar describes the pedagogical value of learning communities and limitations of traditional approaches. Figure 4 illustrates the MLE software architecture, which has five key objectives.

Support for multiple devices and platforms. The MLE will provide an easy-to-use environment for multiple devices and platforms. Students and faculty will have access to numerous devices including tablet and pocket PCs, notebooks, desktop computers, and smart phones running on Mac OS, Linux, and Windows and connected by both wireless and wired networks.

Client-side runtime model. The MLE will be built on a widely available, high-performance, client-side runtime model. Existing Web applications are built upon a legacy HTML model with significant performance limitations. Macromedia Flash MX and other next-generation technologies provide the ideal power and extensibility for building this type of system. An industry leader in providing Web-based, interactive-application development tools, Macromedia estimates that its Flash Player is installed on nearly 98 percent of all computers connected to the Internet.

Comprehensive component integration.
The MLE will offer a powerful object-oriented model for applications and events that integrates multiway communications, personalized access, collaborative learning, and location-based services in a way that educators of varying skill levels can use. This will create a synergy impossible to achieve with existing commercial applications.

Rich data sharing. The MLE will enhance data interchange through the use of Web services created with XML, SOAP, the Web Services Description Language, and the universal description, discovery, and integration protocol. The MLE currently uses XML messages to dynamically retrieve data within the Flash application. The server-side application logic is based on standard Microsoft Internet Information Services Web server technologies. The server-side applications themselves are active server pages that can send and receive XML-formatted messages to client-based Flash applications.

Component reusability. Finally, the MLE hopes to shorten the development cycle, maintain a consistent and easy-to-use interface, and increase reliability through the use of reusable components.

Figure 4. Mobile learning environment software architecture. The MLE seeks to support a 24/7 virtual learning community that seamlessly integrates wired and wireless services.

Figure 5. MLE screenshots: (a) login screen, (b) People Finder service, and (c) Classroom Roster service.
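The MLE's rich data sharing rests on XML messages exchanged between the server-side pages and the Flash clients. The article does not publish the actual message schema, so the element and attribute names below are hypothetical, and the sketch uses Python rather than the ASP/Flash stack the MLE is built on, purely to show the build-and-parse pattern:

```python
import xml.etree.ElementTree as ET

# Hypothetical message schema for a roster request's reply; the real
# MLE format is not specified in the article.
def build_roster_reply(course, students):
    root = ET.Element("rosterReply", course=course)
    for name in students:
        ET.SubElement(root, "student").text = name
    return ET.tostring(root, encoding="unicode")

def parse_roster_reply(xml_text):
    root = ET.fromstring(xml_text)
    return root.get("course"), [s.text for s in root.findall("student")]

# Round-trip: what the server would emit and what a client would decode.
msg = build_roster_reply("CHM101", ["Ada", "Grace"])
course, students = parse_roster_reply(msg)
print(course, students)  # CHM101 ['Ada', 'Grace']
```

Keeping the wire format in plain XML is what lets any client runtime that can parse XML, whether Flash on a PPC or a desktop browser, consume the same server response.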
Sample MLE applications
Figure 5 provides iPAQ PPC screenshots of the MLE login screen and two service applications we have completed. People Finder lets users determine the location of another user connected to the wireless network. Classroom Roster allows faculty and students in a common course to easily contact and communicate with one another, demonstrating MLE's ability to tie into back-end university student information systems.

At the bottom of the screen is a set of icons that the user can activate to bring up menus for carrying out various functions. For example, File transfer sends files from the local device to other devices or to any device from the remote server. People locates users logged into the system. Classroom and course launches various communications options including text, audio, video, and whiteboard and supports one-to-many, many-to-one, and one-to-one communication. Services supports printing, location-aware services, software updates, alert services, user-profile editing, and MLE version and copyright information.
Despite the technological revolution in information access and communication, the promise of the virtual classroom remains largely unfulfilled. While many faculty employ technology in their teaching, few courses offer more than online syllabi, study sheets, group discussion boards, and e-mail. Hardware and software constraints continue to limit more sophisticated strategies such as virtual help sessions or electronically graded homework, and few instructors are willing to introduce more than one of these technologies in any given class.

Existing learning systems do not provide rich data sharing and location-aware services, and multiway communication is usually limited to text. Moreover, the information these products store cannot be easily integrated with the data on the instructor's computer. Finally, the systems are not designed for mobile devices, the most promising technology for anytime, anywhere learning.

Project Numina's mobile learning environment attempts to address these deficiencies by supporting virtual learning communities. Its modular construction and reliance on readily available commercial software should make the MLE relatively easy to implement at other institutions. Several additional software applications are under development, and future plans call for integrating the MLE more fully with core campus data services to build more useful value-added educational offerings. ■
Acknowledgments
This work was carried out in the Laboratory for Research on Mobile Learning Environments at the University of North Carolina Wilmington. Special thanks to the technical staff in the UNCW Information Technology Systems Division for their support and help in providing the necessary access rights to the campus network to make this project possible.
References
1. P. Asay, "Tablet PCs: The Killer App for Higher Education," Syllabus, Apr. 2002; www.campustechnology.com/article.asp?id=6246.
2. J.V. Boettcher, "The Spirit of Invention: Edging Our Way to 21st Century Teaching," Syllabus, June 2001; www.syllabus.com/article.asp?id=3687.
3. V. Crawford et al., Palm Education Pioneers Program: March 2002 Evaluation Report, Palm, 2002; www.palmgrants.sri.com/PEP_R2_Report.pdf.
4. M.A.C. Fallon, "Handheld Devices: Toward a More Mobile Campus," Syllabus, Nov. 2002; www.campustechnology.com/article.asp?id=6896.
5. "University of Louisville: Med Schools Integrate Handhelds," Syllabus, Feb. 2003; www.campustechnology.com/article.asp?id=7261.
6. "The 2003 National Survey of Information Technology in US Higher Education: Campus Portals Make Progress, Technology Budgets Suffer Significant Cuts," The Campus Computing Project, Oct. 2002; www.campuscomputing.net/summaries/2002.
7. P.S. Shotsberger and R. Vetter, "Teaching and Learning in the Wireless Classroom," Computer, Mar. 2001, pp. 110-111.
8. R.A. Burnstein and L.M. Lederman, "Comparison of Different Commercial Wireless Keypad Systems," The Physics Teacher, May 2003, pp. 272-275.
9. G. Lugo and R. Herman, "Mathematics on Pocket PCs," Proc. 15th Ann. Int'l Conf. Technology in Collegiate Mathematics (ICTCM 15), Addison-Wesley, 2002, pp. 123-127.
10. G. Lugo and R. Herman, "Inverse Problems for Vibrating Beams," Proc. 15th Ann. Int'l Conf. Technology in Collegiate Mathematics (ICTCM 15), Addison-Wesley, 2002, pp. 128-132.
11. J.L.P. Bishoff, "An Analysis of the Use of Handheld Computers in the Chemistry Laboratory Setting," master's thesis, Dept. of Chemistry and Biochemistry, Univ. of North Carolina Wilmington, 2002.
Barbara P. Heath is a managing member of East Main Educational Consulting, LLC (www.emeconline.com), in Southport, North Carolina. Her research interests include assessing the use and impact of instructional technology and the evaluation of distance-learning programs. Heath received a PhD in science education from North Carolina State University. Contact her at [email protected].

Russell L. Herman is an associate professor in the Department of Mathematics and Statistics at the University of North Carolina Wilmington (UNCW), where he is also a faculty associate at the Center for Teaching Excellence. His research interests include nonlinear evolution equations, soliton perturbation theory, fluid dynamics, relativity, quantum mechanics, and instructional technology. Herman received a PhD in physics from Clarkson University. He is a member of the Mathematical Association of America (MAA), the American Mathematical Society (AMS), the American Physical Society (APS), and the Society of Industrial and Applied Mathematics. Contact him at [email protected].

Gabriel G. Lugo is an associate professor in the Department of Mathematics and Statistics at UNCW. His research interests include general relativity, differential geometry and its applications to mathematical physics, hypermedia, and instructional technology. Lugo received a PhD in mathematical physics from the University of California, Berkeley. He is a member of the MAA, the AMS, and the APS. Contact him at [email protected].

James H. Reeves is an associate professor in the Department of Chemistry and Biochemistry at UNCW. His research interests are in designing chemistry laboratory exercises for distance-learning courses and instructional technology. Reeves received a PhD in chemistry from Northeastern University. He is a member of the American Chemical Society. Contact him at [email protected].

Ronald J. Vetter is a professor and chair of the Department of Computer Science at UNCW. His research interests include parallel and distributed computing, mobile computing, wireless networks, and Web-based distance education. Vetter received a PhD in computer science from the University of Minnesota. He is a senior member of the IEEE Computer Society and the IEEE Communication Society and a member of the ACM. Contact him at [email protected].

Charles R. Ward is a professor of chemistry and computer science and chair of the Department of Chemistry and Biochemistry at UNCW. His research interests include applications of constructivist theory to teaching science and instructional technology. Ward received a PhD in science education from Purdue University. He is a member of the American Chemical Society. Contact him at [email protected].
COVER FEATURE
Architecting Multimedia Environments for Teaching

Thus far, developers have created only partial solutions for using computational equipment in education. Research must focus more effort on developing architectures capable of combining technologies that target the classroom and that allow specifying "what" rather than "how" tasks should be done.

Gerald Friedland and Karl Pauls, Freie Universität Berlin
Although surrounded by today's many technological enhancements, teachers remain utterly alone in front of their classes. Even in 2005, most teachers still rely on well-established primitive aids. For example, the chalkboard—one of history's earliest teaching tools—remains the preferred exposition medium in many scientific disciplines. Since the advent of computational devices in education, researchers have sought the means for properly integrating them and taking advantage of their capabilities.

The difficult task of architecting multimedia environments for teaching must start with a needs analysis. The most challenging part involves guaranteeing reliability on the one hand while accommodating opportunities for innovation on the other. Thus, we propose building a reliable, ubiquitous, adaptable, and easy-to-use technology-integrating black box. Placing this system atop a service-oriented component model implemented on a platform-independent layer such as a virtual machine will provide the adaptability developers need. Loosely coupled components will accommodate a nonmonolithic approach and ease reuse. By reusing and enhancing components, the system will become increasingly reliable, while a building-block architecture will keep it manageable.
WHAT TEACHERS HAVE

0018-9162/05/$20.00 © 2005 IEEE. Published by the IEEE Computer Society.

The demand for computational equipment to use in education is surging. Several partial solutions already exist, but no one so far has put forth a global vision for using this technology. Nor have researchers devoted much effort to developing architectures that combine classroom-focused technologies with easy-to-use designs. To date, three e-learning approaches predominate:

• intensive use of slide show presentations;
• video recordings of lectures transmitted via fiber optics and, more recently, Internet broadcasting; and
• the creation of e-learning modules such as dynamic Web pages, Flash animations, or Java applets.

Slide show presentations enable good visualization and smooth lecture performance. The instructor plans the presentation's structure up front, taking into account all required resources. Visual elements such as tables, diagrams, or images can be presented directly to the audience. Further, computer-generated slides can be printed out so that students don't need to copy the content for later review. However, slide show presentations often appear static because everything must be planned in advance, leaving the teacher few opportunities to adapt the content in interaction with the students. Usually, slides present content in note form, structured as bullet-point lists, which dramatically restricts the lecturer's freedom of expression. Often, the instructor must deliver information out-of-band and unencapsulated in the slides because drawing diagrams by hand is easier and more intuitive and does not require hours of preparation.

Students sometimes feel overwhelmed when a huge number of slides are displayed in rapid succession. Especially in math and physics, the journey is the reward, so students need to focus not on the results presented in the slides but on the development of the thoughts that led to them.

Recording a video of the entire lecture—including a picture of the board and the lecturer, along with an audio track—lets students follow the lecture remotely and recall previous sessions. To record or transmit classes, standard Internet video broadcasting systems have become popular because state-of-the-art video broadcasting software is widely available and straightforward to handle. Existing solutions focus either on recording and transmitting a session or on using videoconferencing tools to establish a bidirectional connection, or feedback channel. This approach does not support the teaching process as such, and it also introduces additional issues. Technical staff must be on hand at least for setup and maintenance; the more professional approaches require camera and audio recording personnel. Standard video Webcast tools yield a technical quality that is inappropriate for educational content: Text and drawings, whether from slides or the chalkboard, tend to be poorly encoded, either because video compression omits sharp edges or because delivering the content smoothly requires a high-bandwidth Internet connection, and some students cannot easily access the video from a remote location.
Educational mini-applications such as dynamic Web pages, Flash animations, or Java applets can be used for presentation as well as for individual training by the student at home, without imposing restrictions on the content or representation. However, using conventional authoring systems, the ratio of production time to the duration of the produced learning unit tends to be wildly disproportionate, mainly because traditional teaching know-how does not easily match contemporary authoring tools.1 In addition to the technical effort, using these units to structure didactical content for the Web requires a huge amount of work. Given the lack of a standard, the implementation and interfacing possibilities vary as much as their purposes. This makes reuse or adoption by third parties almost infeasible, while reassembly usually requires a complete rewrite.
WHAT TEACHERS NEED

It makes sense to assume that slide shows, video recordings, and e-learning modules directly result from the search for answers to the educational field's most pressing concerns. An analysis points directly to the following needs:

• support for classroom teaching that includes the possibility of integrating educational mini-applications;
• tools that support the preparation of classroom teaching; and
• synchronous and asynchronous remote-teaching support.

Clearly, given that it is their core job, teachers must consider classroom instruction their first priority. This makes it likely they will refuse any tool that does not leverage the experience and practical knowledge they have gained in a lifetime of teaching. An experienced teacher, for example, can deliver an excellent lecture spontaneously, backed by life experience only. An excellent teacher should remain excellent whether aided by electronic devices or not. Further, becoming familiar with a given technology should require little time. Any tool must thus conform to the teacher's established working habits while adding value to the teaching experience. Given that every teacher has a different perception of what constitutes a good lecture, instructors must be able to change tools according to their preferences and ideas.

A smooth lecture performance depends on the quality of the preparation, which in turn consists of gathering content, structuring the lecture, and preparing didactic elements such as charts, figures, and pictures. Multimedia elements can be useful didactic tools in this context, but their increasing use requires even greater preparation effort. To avoid redundant work—such as presetting the lecture on paper—computer-based education tools must support this process conveniently. This involves preserving as much freedom as possible while allowing as much structure as needed. A teacher should retain control over the amount, order, and elements of the material to be included in the lecture.

Synchronous remote teaching, such as videoconferencing, can provide courses that resource constraints would otherwise make impossible. Asynchronous remote teaching helps students recall past lecture content, review past lectures to catch up on missed content or prepare for examinations, and, if they are physically impaired, gain greater overall access to lectures. Additionally, institutions promote lecture recording and broadcasting because they anticipate that these archives will grow the institution's knowledge base and enhance its prestige. Synchronous or asynchronous teaching offers a valuable enhancement for students, but teachers will implement it only if doing so entails as little overhead as possible.

In essence, teachers need a tool that substantially assists in the preparation and delivery of lectures, allows easy integration of various didactic media, and optionally supports synchronous remote teaching with little or no lecturer overhead. This involves realizing twin priorities: providing the greatest amount of assistance on one hand while permitting the greatest degree of freedom during class on the other.
SURVEYING THE JUNGLE

Realistic expectations

Schools and universities form a sprawling, heterogeneous playground. Developers who want their software system to achieve sustained success must build it to survive in an environment that consists of a variety of software and hardware configurations. More importantly, they must make it adaptable to different software ideologies by, for example, avoiding design decisions based on their political biases toward various operating systems.

Vendor advertising claims to the contrary, lecture halls and seminar rooms are not professional recording studios. Consequently, most current systems will not yield professional-quality recordings just by plugging a microphone into a sound card and starting the lecture. A realistic approach should also emphasize reliability: Systems must continue working after their individual parts fail, at least at the level of providing switchover or backup functionality.

Successful remote teaching requires an awareness of the targeted students' technical prerequisites and takes into account future technological or target-group shifts. For example, following a remote lecture for the first time presents a formidable psychological barrier when the participant must first install the client software. We can't assume that all students have an Internet connection, let alone a high-bandwidth connection. A 2003 survey of engineering students in Berlin revealed that although 93 percent had Internet connectivity, more than 50 percent had only a modem connection.1 Given these statistics, we advise broadcasting at different quality levels, splitting the content into different streams, and providing the remote viewer with the choice of turning off individual audio, video, or slide streams for a given broadcast. Additionally, content should be distributable by offline means such as DVD.

Baseline requirements

The software must fit the existing hardware infrastructure and should be easily adaptable to working with other multimedia applications. Developers should also avoid excessive concentration on the construction of proprietary specialized solutions, a trend that has resulted in the notable absence of generic, sustainable, and reusable approaches to e-learning. The hardware and software that support classroom teaching must eliminate as much overhead as possible. Teachers who enhance their lessons with computing devices should still be able to step into the classroom and start lecturing as usual. Both teachers and students must take the technology for granted—which cannot occur until it becomes ubiquitous. This, however, requires the seamless integration of structural changes or functional enhancements—such as the addition of new media, changes in technical formats, or simply an upgrade to new hardware.

Seeking sustainability

The more specialized a solution, the lower the probability of its reuse—which directly affects its sustainability. Further, just as in software engineering, monolithic approaches hinder partial reuse. For example, extracting individual slides from a presentation recorded entirely as compressed video can be cumbersome. Additionally, the content of many course topics, curricula, and presentations undergoes rapid changes because of technological innovations, legislative alterations, or cultural developments. Therefore, development should focus on creating content rather than administering it. Exclusively building content-management systems or performing excessive research on metadata will not help to achieve this goal nor reduce costs. A proposed software environment for multimedia-enhanced teaching should offer more than just the management of individual educational units. It must provide a complete solution that allows the integration of educational mini-applications and fundamentally supports the creation and distribution of generated content.

Integrating technologies for teaching in the targeted environments while supporting reusability and cooperation between different organizations and systems requires high flexibility on the one hand and high reliability on the other. Ideally, solutions will integrate seamlessly into a teacher's workflow using the hardware the institution provides, while available personnel can easily manage configuration.

Figure 1. Self-Organizing Processing and Streaming Architecture (SOPA). This framework manages multimedia processing and streaming components organized in a flow graph that features autonomous assembly of stream-processing components. [The figure shows a layered stack: SOPA-using applications on top of SOPA, which provides component discovery and deployment, atop a component framework and an execution platform.]
INSIDE THE BLACK BOX

What does this reliable, ubiquitous, adaptable, and easy-to-use technology-integrating black box look like from the inside? Over the past several years, researchers have proposed component-based software engineering approaches for many application areas; the tools built with the Eclipse Rich Client Platform (www.eclipse.org) are one example. We propose that these approaches offer a perfect solution for creating classroom-supporting applications. Further, service orientation can provide loose coupling between components inside a specific framework,2,3 which allows for their dynamic download and deployment from remote sources. These components subsequently provide their services in the local framework and add functionality to an application during runtime, as needed.
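As a rough illustration of this service-oriented, loosely coupled style, the sketch below shows a toy service registry: components register implementations under an interface name, and applications look them up at runtime, so providers can be added or swapped while the application keeps running. All names here (ServiceRegistry, register, lookup, "audio.codec") are our own invention for illustration; a real system would rely on an OSGi-style framework rather than this toy.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of a minimal service registry in the spirit of
// (but much simpler than) OSGi's service layer.
public class ServiceRegistry {
    static class Registration {
        final String serviceInterface;
        final Object implementation;
        Registration(String serviceInterface, Object implementation) {
            this.serviceInterface = serviceInterface;
            this.implementation = implementation;
        }
    }

    private final List<Registration> services = new ArrayList<>();

    // A component calls this when it is deployed, possibly after being
    // downloaded from a remote repository at runtime.
    public void register(String serviceInterface, Object implementation) {
        services.add(new Registration(serviceInterface, implementation));
    }

    // Loose coupling: callers name the contract, not the implementation.
    public Object lookup(String serviceInterface) {
        for (Registration r : services) {
            if (r.serviceInterface.equals(serviceInterface)) {
                return r.implementation;
            }
        }
        return null; // service not (yet) available
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        // A newly deployed component contributes an "audio.codec" service.
        Function<String, String> codec = input -> "encoded(" + input + ")";
        registry.register("audio.codec", codec);

        @SuppressWarnings("unchecked")
        Function<String, String> found =
                (Function<String, String>) registry.lookup("audio.codec");
        System.out.println(found.apply("lecture-audio")); // encoded(lecture-audio)
    }
}
```

Because the application asks only for a contract name, a better codec component can later replace the registered one without touching the calling code.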
Generic classroom installation

In a generic scenario, after installing the classroom software, users configure it according to the application's specific needs. To achieve this, they use tools that specify what will be used rather than how it will be implemented. Subsequently, the system analyzes its environment, downloads components from remote repositories as needed, and assembles these services into a composition that provides the required functionality.

An audio recording wizard provides one example of such a configuration and environmental analysis tool.4 An expert system presented via a GUI wizard guides the user through the systematic setup and test of the sound equipment. The result, an initial modified composition description, contains a set of filtering services needed for the pre- and postprocessing of a given recording. While the lecture is being recorded, the system monitors and controls important parts of the sound hardware. For example, it detects and reports a range of handling errors and hardware failures. The system also simulates and automatically operates standard recording-studio equipment such as graphic equalizers, noise gates, and compressors.
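To make the idea of one such filtering service concrete, here is a minimal, hypothetical sketch of a hard noise gate: samples whose magnitude falls below a threshold are muted. Real gates add attack and release smoothing, and none of this code comes from the wizard described above; it only illustrates the kind of node a filtering chain would contain.

```java
// Hypothetical sketch of a noise-gate filtering service. Samples are
// assumed to be normalized amplitudes in [-1, 1]; anything quieter
// than the threshold is treated as background noise and zeroed.
public class NoiseGate {
    public static double[] apply(double[] samples, double threshold) {
        double[] out = new double[samples.length];
        for (int i = 0; i < samples.length; i++) {
            // Pass the sample through only when it is loud enough.
            out[i] = Math.abs(samples[i]) >= threshold ? samples[i] : 0.0;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] gated = apply(new double[] {0.02, 0.5, -0.7, 0.01}, 0.1);
        System.out.println(java.util.Arrays.toString(gated));
        // -> [0.0, 0.5, -0.7, 0.0]
    }
}
```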
Remote teaching

Another example, a whiteboard application, captures participants' written text, then stores and transmits this data to interested clients over the Internet. To provide additional functionality, the teacher uses a wizard to choose from a list of services—provided by locally or remotely available components—that can be seamlessly integrated into the lecture. When the lecturer chooses a certain service, such as a bubble-sort visualization mini-application, the actual component downloads and the whiteboard core displays the service. The same service can be used on the client side, possibly provided by a different component.

In a video-streaming scenario, various services could be used to convert the content into different formats. A receptor service receives the incoming connection requests, then chooses and configures the right converter services to mediate between the captured video type and a format the client's software can display.

The availability of downloadable components provides one of our proposed approach's most beneficial elements. Yet the question of how to build these remote repositories remains unanswered. Given the generic approach used to build components that provide their functionality via services, one task is to define service contracts in the form of interface descriptions. After this, arbitrary parties can share their component implementations in those repositories. The information specified in a service contract must be decided in the context of the specific domain. Given this restriction, syntactical interface descriptions combined with a few metadata properties will suffice in most cases. The approach we propose may seem abstract, but it can be implemented pragmatically using established technologies.
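The receptor's decision can be pictured with a small, hypothetical sketch: given the captured format and the formats a connecting client accepts, pick the first registered converter that bridges the two. The Receptor and Converter names and the format strings are illustrative only, not an actual API.

```java
import java.util.List;

// Hypothetical sketch of a receptor choosing a converter service for a
// connecting client. A real system might search remote repositories
// when no local converter matches; this sketch just returns null.
public class Receptor {
    static class Converter {
        final String from, to;
        Converter(String from, String to) { this.from = from; this.to = to; }
    }

    public static Converter choose(List<Converter> available,
                                   String captured, List<String> clientAccepts) {
        for (Converter c : available) {
            // A converter fits if it consumes the captured format and
            // produces something the client can display.
            if (c.from.equals(captured) && clientAccepts.contains(c.to)) {
                return c;
            }
        }
        return null; // no suitable converter available locally
    }

    public static void main(String[] args) {
        List<Converter> converters = List.of(
            new Converter("raw-video", "mpeg4"),
            new Converter("raw-video", "quicktime"));
        Converter picked = choose(converters, "raw-video", List.of("quicktime"));
        System.out.println(picked.from + " -> " + picked.to);
        // -> raw-video -> quicktime
    }
}
```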
SOPA

As Figure 1 illustrates, the Self-Organizing Processing and Streaming Architecture (SOPA) (www.sopa.inf.fu-berlin.de) works well with our approach. This framework manages multimedia processing and streaming components organized in a flow graph that features autonomous assembly of stream-processing components.5 The dynamically organized processing graphs use components from various distributed sources. SOPA installs these components on the fly according to a graph's requirements, which derive from its specific purpose and are assembled according to an application's needs. An XML file describes the graph that glues these components together. SOPA uses general properties to describe nodes—for example, to tell the system to select an audio codec that compresses to a certain bandwidth. The graph's structure can be changed and its nodes updated while the system runs, as Figure 2 shows. The implementation uses Oscar (http://oscar.objectWeb.org), Richard S. Hall's open source implementation of the OSGi framework,6 as the underlying component model.

Figure 2. A streaming graph automatically assembled by SOPA to transmit and filter an audio and video broadcast. [The graph's nodes, by label and type: videosource and audiosource (sources); videocoder, filewriter, vumeter, micdetect, noisegate, audiocoder, and filewriter1 (pipes); fork (fork); videoserver and audioserver (targets).]

SOPA currently focuses on multimedia-processing and Internet-streaming applications on either the server or client side. On the server side—on a video streaming server, for example—SOPA integrates and manages codecs according to the connecting client's capabilities at runtime. Further, it supports seamless and transparent reconfigurations and updates to the client by dynamically adapting stream processing to user demands. For each specific demand, SOPA uses Eureka—a Rendezvous-based component discovery and deployment engine7—to assemble a special processing graph that uses components discovered locally or in remote repositories. When accessing remote repositories, SOPA retrieves the requisite components from the remote source and deploys them locally.

SOPA ultimately seeks to ease the development of applications that need an extensible streaming and processing layer while also decreasing the administrative maintenance workload. More specifically, SOPA provides a round-up solution that serves as an extensible framework for managing multimedia components. SOPA synchronizes different independent multimedia streams, such as slides and video streams, and uses an application-independent approach to describe the handling of concrete content such as converting from one multimedia format to another.
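A flow graph of the kind SOPA assembles can be sketched, under heavy simplification, as a source payload pushed through an ordered chain of pipe nodes toward a target. The node names below mirror labels from Figure 2, but the code illustrates only the generic source/pipe/target idea, not SOPA's actual interfaces or its XML graph description.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch of a source -> pipe -> ... -> target chain.
// Each pipe transforms the stream payload in order; the final value
// is what the target node would consume.
public class FlowGraph {
    public static String run(String sourcePayload, List<UnaryOperator<String>> pipes) {
        String data = sourcePayload;              // the source node emits the payload
        for (UnaryOperator<String> pipe : pipes) { // each pipe node processes in turn
            data = pipe.apply(data);
        }
        return data;                              // the target node receives the result
    }

    public static void main(String[] args) {
        // A chain loosely echoing Figure 2's audio branch:
        // audiosource -> noisegate -> audiocoder -> (audioserver)
        List<UnaryOperator<String>> pipes = List.of(
            s -> "noisegate(" + s + ")",
            s -> "audiocoder(" + s + ")");
        System.out.println(run("audiosource", pipes));
        // -> audiocoder(noisegate(audiosource))
    }
}
```

In SOPA itself the chain is assembled from an XML description and can be restructured while running; here the list of pipes stands in for that graph.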
E-CHALK The E-Chalk software system (www.e-chalk.de) captures and transmits chalkboard-based lectures over the Internet. Conceived and supervised by Raúl Rojas and developed by his group at the Freie Universität Berlin,8-10 E-Chalk enhances classroom teaching by integrating the multimedia features of modern presentation software with the traditional chalkboard. The software simulates a chalkboard using a touch-sensitive computer screen on which the lecturer works using a pen. This approach preserves the didactical properties of the traditional chalkboard while helping instructors create modern multimedia-based lectures. Inside E-Chalk, SOPA handles on-the-fly streaming and recording of lectures. E-Chalk comes with a set of nodes that can handle and convert different media types and the necessary XML graph description. When a teacher starts E-Chalk to record or transmit a lecture, the media graph has already been built and the system is already live and online. During media graph construction, the system also checks for updates. At this point, the system updates any nodes that require it. Using SOPA, students also could receive E-Chalk streams using QuickTime or Windows Media Player. To accomplish this, a receptor—a generic media node—waits for an incoming connection and checks its type. Depending on the connecting client’s type, it restructures the given media graph, performs searches, and downloads nodes that support the new media types, using Eureka if necessary. The E-Chalk software works with a variety of hardware components that instructors can substitute for the traditional chalkboard. For example, a lecturer can write on a digitizer tablet or on a June 2005
Figure 3. E-Chalk system used in a lecture hall at Technical University Berlin.
Figure 4. An E-Chalk lecture replayed on a mobile device, with audio and board content replayed at 16 Kbps.
tablet PC, using an LCD projector to display the computer screen's content against a wall. Instructors also can use digitizing whiteboards, like those shown in Figure 3, while an LCD projector displays the screen content on a suitable surface. The software transforms the screen into a black surface upon which the instructor can draw using different colors and pen thicknesses. Scrolling the board vertically provides an unlimited writing surface, and the lecturer can use an eraser to delete part or all of the board's content. Images from the Web or a local hard disk drive can
be placed on the board during a lecture. E-Chalk provides access to CGI scripts to help interface with Web services, and the instructor can also use educational miniapplications, in the form of Java applets pulled from the Internet, on the board.

When the lecturer uses a reserved handwriting-recognition color to draw strokes on the board, the handwritten input passes to a mathematical formula recognizer. The recognizer transforms the input and passes it to other components such as the Mathematica or Maple interface. This can be useful for presenting partial results or for annotating the plot of a certain mathematical function. When the lecturer needs a computation's result, Mathematica or Maple answers with text or an image.

E-Chalk provides another means for interacting with third-party components: Chalklets. These applications interact only through strokes: they recognize drawings and gestures from the screen and respond by drawing their results on the board.

When an E-Chalk lecture closes, the system automatically generates a PDF transcription of the board content, either in color or in black and white.

Macros, E-Chalk's means for preparing lectures in advance, provide a prerecorded series of events that an instructor can call up and replay on the board during a lecture. To do this, the instructor draws the portions of the lecture to be stored in advance. During the lecture, the macros replay either at their original speed or at an accelerated rate determined by the user. Automatically generated macros can be used for visualization.

E-Chalk records all events from the screen or tablet, together with the lecturer's voice and an optional video. The lecture can also be transmitted live over the Internet and synchronized with videoconferencing systems, such as Polycom ViaVideo, for student feedback. Remote users connect to the E-Chalk server to view everything as seen in the classroom.
They can choose to receive the audio and, optionally, a small video of the teacher. A complete E-Chalk lecture with dynamic blackboard image, audio, and video can be sustained over a connection of roughly 128 Kbps. Without the video stream, the required bandwidth drops to at most 64 Kbps. Java-based playback provides the most convenient means for following a lecture: the viewer requires nothing more than a Java-enabled Web browser, with no plug-in or client software to install manually.
Other options include following the lecture in MPEG-2 format on a DVD, a Java-enabled PDA, or a third-generation mobile phone that runs RealPlayer, as Figure 4 shows. When viewing archived lectures, the remote user sees a control console like the one shown in Figure 5 and uses typical VCR tools such as pausing, fast-forwarding, and rewinding to regulate the content flow.
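Replaying recorded events, whether a macro or an archived lecture, amounts to rescheduling timestamped events, optionally at an accelerated rate. A minimal sketch follows; the event representation is invented, since E-Chalk's internal format is not shown here:

```python
def schedule_replay(events, speedup=1.0):
    """Map recorded (timestamp_ms, event) pairs to replay times.

    With speedup=1.0 the recording replays at its original speed;
    with speedup=2.0 every event fires in half the recorded time.
    """
    if speedup <= 0:
        raise ValueError("speedup must be positive")
    t0 = events[0][0] if events else 0
    # Shift so replay starts at time 0, then compress the timeline.
    return [((t - t0) / speedup, ev) for t, ev in events]

# Three recorded stroke events, replayed twice as fast.
recorded = [(1000, "pen_down"), (1400, "stroke"), (2000, "pen_up")]
replay = schedule_replay(recorded, speedup=2.0)
```

Pausing, fast-forwarding, and rewinding in the VCR-style console can be built on the same rescaled timeline.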
EXYMEN
The Exymen software framework (www.exymen.org), shown in Figure 6, seeks to fill the role of a universal cross-platform multimedia editor11 by becoming the rapid prototyping tool of choice for developers of new media formats and codecs. Exymen's developers also promote the tool by offering it as a free download for editing media content.

The editor provides a cross-platform GUI and defines generic data structures and operations for handling time-based media abstractly. Developers can plug in components that fill the abstract data structures with content by providing concrete format-specific handlers. These extensions can be loaded and updated at runtime without interrupting the workflow. Components can use both the framework and other components, so software reuse eases extension development. Developers can provide components to the system from remote locations using Eureka.

Several plug-in components have already been developed for Exymen, and all types of E-Chalk content can be edited individually or synchronously. Exymen can edit audio content supported by the Java Media Framework, such as wav files, QuickTime, or MP3 formats. It also can edit and synchronize Web-exported PowerPoint slides. Exymen can use SOPA's media nodes for editing archived lectures because it shares the local component cache with SOPA. A dedicated Exymen component reads in media graph descriptions to find conversion paths.

Many open questions remain concerning our approach to architecting multimedia environments for teaching, but we believe that by using our requirements analysis to classify proposed aids and by building solutions that adhere to our proposals, many mistakes and much futile or redundant work can be avoided. ■
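Finding a conversion path between media formats over a set of converter nodes, as the dedicated Exymen component does, is essentially a shortest-path search. The sketch below uses breadth-first search over an invented converter inventory; the real node set and search mechanism differ:

```python
from collections import deque

# Hypothetical converter inventory: (input format, output format).
CONVERTERS = [("wav", "pcm"), ("pcm", "mp3"), ("wav", "qt"), ("qt", "mp3")]

def conversion_path(src, dst):
    """Breadth-first search for the shortest chain of formats
    reachable from src via the available converters."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for a, b in CONVERTERS:
            if a == path[-1] and b not in seen:
                seen.add(b)
                queue.append(path + [b])
    return None  # no converter chain exists
```

Each hop in the returned path corresponds to one converter node that would be instantiated in the media graph.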
Figure 5. A lecture recorded with the E-Chalk system and replayed in a Javaenabled Web browser.
Figure 6. Exymen software framework, used here to edit a recorded E-Chalk lecture.
Acknowledgments
The E-Chalk system, an ongoing project at Freie Universität Berlin since 2001, was conceived and is supervised by Raúl Rojas. Several others have contributed to the system, including Kristian Jantz, Ernesto Tapia, Christian Zick, Wolf-Ulrich Raffel, Mary-Ann Brennan, Margarita Esponda, and, most noticeably, Lars Knipping in his doctoral dissertation.

References
1. G. Friedland et al., "E-Chalk: A Lecture Recording System Using the Chalkboard Metaphor," Int'l J. Interactive Technology and Smart Education, vol. 1, no. 1, 2004, pp. 9-20.
2. R.S. Hall and H. Cervantes, "Challenges in Building Service-Oriented Applications for OSGi," IEEE Comm. Magazine, May 2004, pp. 144-149.
3. H. Cervantes and R.S. Hall, "Autonomous Adaptation to Dynamic Availability Using a Service-Oriented Component Model," Proc. Int'l Conf. Software Eng. (ICSE 2004), IEEE Press, May 2004, pp. 614-623.
4. G. Friedland et al., "The Virtual Technician: An Automatic Software Enhancer for Audio Recording in Lecture Halls," Proc. 9th Int'l Conf. Knowledge-Based & Intelligent Information & Eng. Systems, LNCS, Springer, to appear Sept. 2005.
5. G. Friedland and K. Pauls, "Towards a Demand-Driven, Autonomous Processing and Streaming Architecture," Proc. 12th Int'l IEEE Conf. Eng. Computer-Based Systems (ECBS 2005), IEEE Press, 2005, pp. 473-480.
6. R.S. Hall and H. Cervantes, "An OSGi Implementation and Experience Report," Proc. IEEE Consumer Comm. & Networking Conf. (CCNC 2004), IEEE Press, 2004, pp. 394-399.
7. K. Pauls and R.S. Hall, "Eureka—A Resource Discovery Service for Component Deployment," Proc. 2nd Int'l Working Conf. Component Deployment (CD 2004), LNCS 3083, Springer, 2004, pp. 159-174.
8. G. Friedland, L. Knipping, and R. Rojas, "E-Chalk Technical Description," tech. report B-02-11, Dept. Computer Science, Freie Universität Berlin, 2002.
9. R. Rojas et al., "Teaching with an Intelligent Electronic Chalkboard," Proc. ACM Multimedia 2004, Workshop on Effective Telepresence, ACM Press, 2004, pp. 16-23.
10. L. Knipping, "An Electronic Chalkboard for Classroom and Distance Teaching," doctoral dissertation, Dept. Computer Science, Freie Universität Berlin, 2005.
11. G. Friedland, "Towards a Generic Cross-Platform Media Editor: An Editing Tool for E-Chalk," Proc. Informatiktage, Gesellschaft für Informatik, Konradin Verlagsgruppe, 2002, pp. 230-234.
Gerald Friedland is a PhD candidate and researcher at the Center for Digital Media in the Department of Computer Science, Freie Universität Berlin. His research interests include intelligent multimedia technology for electronic learning. Friedland received an MS (Dipl.-Inform.) in computer science from Freie Universität Berlin. He is a member of the DIN-NI 36 committee that cooperates with ISO SC-36 in creating e-learning standards. Contact him at [email protected].

Karl Pauls is a PhD candidate and researcher in Klaus-Peter Löhr's Software Engineering and System Software Group, Department of Computer Science, Freie Universität Berlin. His research interests include security and access control in distributed systems, component-based software engineering, and model-driven development. Occasionally, he collaborates with Gerald Friedland to work on SOPA's underpinnings. Pauls received an MS (Dipl.-Inform.) in computer science from Freie Universität Berlin. Contact him at [email protected].
COVER FEATURE
Toward an Electronic Marketplace for Higher Education

A case study of a MERLOT-IBM prototype shows the feasibility of a framework for the secure creation, management, and delivery of digital content for education and offers insights into service requirements.
Magda Mourad, IBM T.J. Watson Research Center
Gerard L. Hanley, MERLOT
Barbra Bied Sperling, Center for Usability in Design and Assessment
Jack Gunther, IBM Education Industry Solutions

The discovery, evaluation, and acquisition of information form a process that is key to higher education. How successfully faculty, students, librarians, staff, and administrators work through this process depends on their ability to handle a variety of information and the information providers' skill in providing the means for doing so.

A simple example is acquiring content from books. Students must buy or check out books and read them as their part of content acquisition. The university, in turn, must make it easy for students to acquire the books through its bookstore and library. Thus, these two university businesses provide a controlled, reliable, and profitable marketplace, one that publishers have relied on for years.

The availability of digital content has created opportunities for universities and publishers to improve the marketplace. Google lets users search a world of information in a second, and Amazon promotes a user community by publishing users' product evaluations and recommendations. These models motivate an education electronic marketplace (ede-marketplace) as well as reshape how academic institutions view information discovery, evaluation, and acquisition.

But an ede-marketplace is not without risk, as the "Critical Development Issues" sidebar describes. Although such a marketplace is technically feasible, practical deployment issues still hinder the growth of online content sales, relative to the online market for physical goods. One obstacle is the lack of standard techniques for building a trusted digital
rights management (DRM) system that can determine, record, transmit, interpret, and enforce digital rights. The focus has been on perfecting DRM and content-protection schemes, but the real challenge is how to design end-to-end systems that integrate with the changing needs of society and a dynamic business world.

As a first step toward designing such a system, the Multimedia Educational Resource for Learning and Online Teaching (MERLOT, www.merlot.org) and IBM (www.watson.ibm.com) collaborated in examining problems and possible solutions. The collaboration culminated in a framework that IBM designers used to build a prototype ede-marketplace, which IBM and MERLOT evaluated in a field test. The test indicated that this framework offers a trusted environment that is suitable for use by publishers and users for the secure creation, management, and delivery of digital content. Testing an end-to-end system in real situations also helped us understand business rules, user roles, and services that the system must provide for each user role.
COLLABORATIVE CONSTRUCTION
More often than not, technology companies, publishers, and students and faculty in higher education institutions do not collaborate because customer and vendor roles often hinder true partnerships in an interdependent market. The MERLOT-IBM collaboration is uniquely effective because it joins a team that develops technology with those who will use it. The collaboration's goals, which emphasized this partnership,1 were to design a technology service that would provide more value to customers and to establish a cost-effective and timely path to developing that service.

As a consortium of more than 500 higher education campuses managed by 16 state systems and seven individual institutions, MERLOT is in a key position to articulate, validate, and verify an ede-marketplace's requirements. It also has partnerships with other digital libraries worldwide, such as Globe (http://taste.merlot.org/documents/articles/globe_press_release.pdf). MERLOT manages a digital library with roughly 12,000 online teaching-learning materials along with a variety of services, including peer review, professional development, and federated search. In 2004, the consortium had some 200,000 unique users and 23,000 registered members of faculty, staff, students, librarians, administrators, and other members of the education community. MERLOT's user community and collection of resources meant that we could add commercial materials into an active and growing digital exchange rather than build an ede-marketplace from scratch.

For its part in the collaboration, IBM provided a generic model of its Electronic Media Management System (www.ibm.com/software/data/emms). The EMMS, along with other technologies, provides an open architecture that offers flexible services for managing e-learning content. With these tools and services, content owners and distributors can securely publish digital content on the Web and reliably protect the associated intellectual property rights in an open network environment.

Published by the IEEE Computer Society
0018-9162/05/$20.00 © 2005 IEEE

Critical Development Issues
Everyone can be a publisher or content supplier. Universities, faculty, and students are publishing materials on the Web that provide complementary resources to published materials. In the expression and execution of intellectual property for digital content, privacy and security of patron identity are difficult to control reliably. Not only are the publishers at significant risk, but according to the Teach Act (http://www.lib.ncsu.edu/scc/legislative/teachkit/), universities face significant liabilities if they do not protect the digital rights of publishers within their online courses.

Users could experience a comfort-zone shift. The Web has become a powerful alternative business to the library and bookstore, both of which are experiencing decreased traffic into their brick-and-mortar business. Nonetheless, students and faculty find the physical appearance and business conventions of these establishments more familiar and consistent than the varied interfaces and conventions of Web-based resources. Could this comfort level compromise the e-marketplace's usability?

New business models required. The digital medium could dramatically reduce the operational costs of producing, selling, and distributing educational content. Saving libraries and students money in buying educational content is a major business imperative for universities. Reducing the costs of business operations for publishers, and thereby increasing their profit margin, is another significant business imperative. There will be new business operation costs in the ede-marketplace, such as for packaging the digital content and developing tools to manage digital inventory. In addition, protection might not withstand pirating attacks, user acceptance could fall short of the required level, and interoperability issues could hinder the development of such a market.
Publishers, higher education institutions, and technology companies must validate a variety of assumptions for these new business models if the ede-marketplace idea is to bear fruit.
FOUNDATIONAL TECHNOLOGY
Figure 1 shows the operational flow in the EMMS, one of the earliest systems for securely distributing digital content. Within a secure content-distribution system, publishers sell their content to users through
Figure 1. Information flow in IBM's Electronic Media Management System. (1) The content publisher distributes content; (2) the user browses and buys content from a Web retailer; (3) the end-user application receives a key to decipher content; (4) the application receives the encrypted content; and (5) the clearinghouse handles the billing for the content royalties.
intermediaries (retailers) on the Web. The clearinghouse establishes a focal point for authorization and tracks content use. It is the entity that the content owner, Web retailer, and user can trust. The content-hosting facility ensures that users anywhere receive proper delivery service without retailers having to store bulky content in their servers and forward it directly to their customers.

IBM initially deployed this scheme for Internet media distribution but extended it to support broadcast networks, which lets parties defer account reconciliation. The system also runs on mobile-phone networks and has commercially distributed music, video clips, and e-books.

As Figure 1 shows, the EMMS has five main operational steps:

(1) The content publisher distributes the encrypted content and promotional material. A networkwide hosting facility stores the content, which the publisher can globally distribute or replicate as appropriate.
(2) The user browses and buys content from a Web retailer. After payment, the retailer sends a receipt to the user application so that the user can prove the purchase of the rights the receipt specifies.
(3) The end-user application automatically requests and receives a license (key) that it can use to decipher the content.
(4) The user application automatically requests and receives the encrypted content from the content-hosting facility. The application immediately decrypts the content and reencrypts it using the user's private key, and a local application that manages the user's acquired media assets stores the reencrypted content. Hacking or tampering with this application software module is extremely difficult.
(5) Finally, the clearinghouse consolidates and reconciles all the information needed to ensure proper billing for the content's IPR royalties.
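The five-step flow can be mimicked in a few lines. The XOR cipher below is a toy stand-in for real cryptography, and every name here is illustrative rather than part of the EMMS API:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only; NOT cryptographically secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# (1) The publisher encrypts content under a content key and hosts it.
content = b"lecture notes, chapter 1"
content_key = secrets.token_bytes(16)
hosted = xor(content, content_key)

# (2)-(3) After purchase, the clearinghouse wraps the content key for
# the buyer; this symmetric "user key" stands in for the public-key
# wrapping a real system would use.
user_key = secrets.token_bytes(16)
license_blob = xor(content_key, user_key)

# (4) The client unwraps the key, decrypts the download, and
# immediately reencrypts it under the user's own key for local storage.
recovered_key = xor(license_blob, user_key)
plaintext = xor(hosted, recovered_key)
local_copy = xor(plaintext, user_key)
```

The point of the sketch is the key-separation structure, not the cipher: content travels encrypted once, and only the short content key is rewrapped per user.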
The basic principle of this approach is to establish mechanisms for securing content against unauthorized use. To ensure end-to-end security in the system, we used system administration mechanisms for access control, techniques for assuring content integrity, and encryption algorithms with sufficient computational complexity and robustness to withstand attacks. We also watermarked content to identify its origin in case of pirating.
E-LEARNING ENHANCEMENTS
To address the unique requirements of handling digital content in education,2 we created several DRM design and implementation extensions to the EMMS and end-to-end security controls. Driving these extensions is the need to conform to emerging e-learning standards in all three content lifecycle phases: authoring and packaging, management and distribution, and consumption. Standards and standards-oriented efforts include the Shareable Content Object Reference Model (SCORM),3 the IEEE Learning Object Metadata (LOM),4 the IMS Global Learning Consortium's Content Packaging Information Model,5 and the ongoing effort within the IEEE to identify the functional and technical requirements of a Digital Rights Expression Language (http://ltsc.ieee.org/wg4/).

One of our extensions was to create a learning content management system (LCMS), which provides a secure virtual learning environment for teams of people engaged in a common e-learning activity through its support of importing, designing, developing, managing, exporting, and reusing learning objects and courseware. The idea behind the LCMS is to let educational institutions outsource customized educational services without having to purchase and maintain the infrastructure to run the applications.

The LCMS accommodates a range of participants:

• The service provider is the entity hosting the system, and its administrator provides access and services to all the subscribers.
• Subscribing organizations are universities and other producers and consumers of learning content, which could be at several remote sites. They might also be content providers.
• Third-party vendors who specialize in e-learning content creation and publishing might also provide content.

As Figure 2 shows, the LCMS has five basic subsystems (shaded modules) for content protection: DRM content packager, e-store, content-hosting repository, DRM license server (clearinghouse), and DRM client browser.
An LDAP server provides a mechanism for single sign-on to the service provisioning manager and all the hosted applications; it also enables user profiling and account management.
Content packager
Content owners can use the DRM content packager to encrypt and package digital content. Content packages must be SCORM-compliant3 to
ensure interoperability. SCORM provides standard specifications for describing and packaging Webbased learning content. The SCORM content aggregation model represents a pedagogically neutral means for instruction designers and implementers to aggregate learning resources that will aid in delivering a desired learning experience. The model5 comprises • the SCORM content model, a nomenclature that defines the content components of a learning experience; • SCORM metadata, a mechanism for describing specific instances of the content model’s components; and • content packaging, which defines how to represent the intended behavior of a learning experience (structure) and how to package learning resources for movement between different environments (packaging).
Content-hosting repository
tively. It uses a digital rights expression language to extend the SCORM-compliant metadata to include an expression of complex digital rights. This extension provides the basic requirements to enforce the use rights policy over the learning resource.
E-store This module promotes the content and grants use rights to individual users in the form of digital certificates (or licenses). The e-store provides users with Web-based interfaces to tools for searching learning content and providing feedback to the content authors. It also generates detailed reports on downloads and purchasing transactions associated with each piece of content. Technically, the e-store can track who receives each content piece so that the provider can notify them when the content changes, but, clearly, privacy is a concern.
DRM content packager
Content packaging includes a manifest file that describes the package itself and contains the metadata about the package, an optional organization section that defines content structure and behavior, and a list of references to the resources in the package. It also includes directions for how to create an XML-based manifest and for packaging the manifest and all related physical files. Through Web-based interfaces, content owners specify the set of use rights they will permit and provide marketing information such as price and promotional material. The DRM content packager handles the ingestion of the learning content, its metadata, and associated digital rights, as well as the promotional material for the catalog, into the content-hosting repositories and e-store, respectively.
Content-hosting repository
The repository, which stores content, comprises three main modules. The content manager loader receives requests from the DRM content packager to update, insert, or delete content packages. The content manager warehouses the packages and their constituent files and handles the associated digital rights files. Finally, the content delivery module delivers the purchased content to the learning management system (LMS) and its associated rights to the DRM license server (clearinghouse). It could also deliver the purchased content directly to the client machine if the user is taking the lesson offline. By logically separating the content-hosting repository from the e-store, the LCMS offers flexibility and independence in the choice of distribution channel. The LMS provides learning management, content delivery, progress tracking and assessment, and synchronous and asynchronous Web-based collaboration among LMS users.
Figure 2. Architecture of a prototype learning content management system. Through five basic subsystems, the LCMS provides a secure virtual learning environment for importing, designing, developing, managing, exporting, and reusing learning objects and courseware. Shaded modules are digital rights management (DRM) extensions to the original LCMS.
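As a rough illustration of how a single purchase might flow through modules like these (an e-store, a content-hosting repository, and a license clearinghouse), consider the sketch below. All class, method, and field names are invented for illustration; they are not the prototype's actual interfaces.

```python
# Hypothetical sketch of one purchase flowing through LCMS-style modules.
# Names and data shapes are invented, not the prototype's real API.

class LicenseServer:
    """Clearinghouse: issues rights and logs transactions for auditing."""
    def __init__(self):
        self.log = []  # purchase log for auditing

    def issue_license(self, user, content_id, rights):
        self.log.append((user, content_id))
        # A real server would encrypt `rights` with the user's public key.
        return {"user": user, "content": content_id, "rights": rights}

class Repository:
    """Content manager warehouse plus a content delivery module."""
    def __init__(self, packages):
        self.packages = packages

    def deliver(self, content_id):
        return self.packages[content_id]

class EStore:
    """Sells content; delegates delivery and license issuance."""
    def __init__(self, repo, clearinghouse):
        self.repo = repo
        self.clearinghouse = clearinghouse

    def purchase(self, user, content_id, rights):
        content = self.repo.deliver(content_id)  # to the LMS or client
        license = self.clearinghouse.issue_license(user, content_id, rights)
        return content, license

store = EStore(Repository({"geo-101": "<lesson bytes>"}), LicenseServer())
content, license = store.purchase("alice", "geo-101", {"display"})
print(sorted(license["rights"]))
```

The point of the sketch is the separation of duties: the repository never decides rights, and the clearinghouse never touches content, mirroring the logical separation the architecture describes.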
License server
The DRM license server delivers content decryption keys to authorized users, handles authorization, and tracks content use. In general, it is the only component that all parties (content owners, distributors, and consumers) can trust. The server receives a public key from a user who registers for the first time and uses the key to encrypt rights when sending content to that user. To facilitate auditing, the server logs the purchase transactions that relate to protected content.
Client browser
Authors and students use a DRM-enabled client to process the protected content during content consumption. When users log onto the system for the first time, they download a software extension that lets the browser render protected content6 in a tamper-resistant environment. This extension lets the client browser decrypt content in real time and display it, controlling the browser menu to allow only permitted operations (such as read but not copy).
FIELD TEST PLANNING
The ede-marketplace field test had two goals. The first was to test the ede-marketplace's viability and, with a focus on DRM services, develop recommendations for improving it. The second was to build an awareness of the ede-marketplace's capabilities, thus creating demand for its services.
A critical part of test planning was to develop use cases that would promote a shared understanding of the services and priorities that the ede-marketplace should provide. We developed use cases for the individual faculty as content provider, commercial publisher as content provider, and faculty and students as customers. These use cases helped us choose the functions we wanted to develop and test.
Identifying services
We began with the use case in which a faculty member provides content, which helped us shape the requirements of both content providers and customers that might use the ede-marketplace. Professor Bob has spent more than two years developing and testing a simulation for teaching introductory and advanced students about earthquakes. He believes the application will benefit a broad cross section of students, but he sees significant barriers to the extensive use of his work.
Problems. As Bob takes the steps to distribute his work on a larger scale, he has many questions:
• Will my application be reliably available? I wonder if my department's server and my institution's technical infrastructure and network will be able to carry the load of my high-bandwidth application.
• Will I be able to manage the digital rights to my application? I'm willing to provide free access to students and educators, but I want to be sure that they comply with fair-use policies and that I get recognition for my work.
• How long should I allow free distribution and use? Could I offer it free today and then maybe sell this version or upgrades of it next year?
• Have I offered users enough understandable information about how to use my application? I don't want an avalanche of e-mail asking questions, but I would like some feedback on bugs, suggested improvements, and general effectiveness.
• How do I sell enhanced versions of my application? I'm not sure how to manage the whole business process, such as how to sell the application legitimately and efficiently.
• How do I promote my application? I know a lot about the application per se, but I know very little about how to market it, and I don't have time to figure it out.
• I know standards and interoperability are positive features, but how do I exploit them in my application?
• How can I let users know about upgrades and different versions? I don't want to have to manage versions to the extent that I become a publisher.
• How do I know how many and what kind of people are using my application? I might want to improve it, and information like this would be handy. I know campus IT service is overwhelmed, so asking for their support in determining a user profile is probably unrealistic.
Solution scenario. Bob has heard about MERLOT and its services to support faculty who want to distribute and sell applications in ways that conform to the academic culture. He goes to the MERLOT Web site, clicks on the link to MERLOT's services for faculty authors of academic technologies, and reads through the guidelines for MERLOT author services. He learns that MERLOT can provide what he needs and that he will not have to figure out what to do. He decides to give it a try.
• Bob registers as a MERLOT author member, using a simple sign-up procedure that requires only an Internet browser. He selects a set of system requirements that seem compatible with his system.
• Using the site's content-preparation tools, Bob describes his application's content and potential use so that he and the MERLOT services can properly manage it. The tools create a sharable content object (SCO), which he can store on his PC or in the MERLOT repository services.
• Because his campus is a MERLOT institutional partner, Bob opts to use MERLOT's repository services, so he logs into the MERLOT author services site and follows the simple steps to upload his SCO application. The site then prompts him to define the digital rights he wants applied and the promotional information he'd like to use. He also reviews the royalty policy and chooses among a variety of royalty options.
• After spending about an hour completing these steps, Bob explores MERLOT's author services and sees that he has many options for updating his application's description, promotion, and digital rights. He feels more comfortable knowing that he can experiment with different options to see what works best.
• Bob is using his application in his own course, so he adds links to his course management system (WebCT or Blackboard, for example). He asks students to submit the virtual lab report his application generates directly to him within the course management system. This flexibility is a significant asset because it lets students work on the lab whenever and wherever they are and submit the lab report to Bob wherever he is.
• A month after Bob puts his material into MERLOT, he signs into MERLOT's author services and requests a report on his application's use. He's surprised to find that many people are using it because he had not received an overwhelming amount of e-mail about the application details. He decides to add the MERLOT report to his tenure and promotion portfolio as evidence of his work's quality and value.
• A year after he puts his material into MERLOT, Bob requests another report on the application's use and finds that 1,000 users access it daily. He's pleased that MERLOT's bandwidth limits are not constraining use and that if he needs more bandwidth, he can add it without moving to another hosting environment.
Two test phases
From the faculty author use case, IBM and MERLOT identified two cornerstone ede-marketplace functions that the field test should evaluate: the ease of shopping and the ability to stock the shelves—that is, the appeal to publishers of the services the ede-marketplace offers.
In Phase 1, we evaluated the ede-marketplace's performance when faculty, staff, and administrators search, find, acquire, and use resources from their own computing environment. For this phase, resources consisted of MERLOT-authored content reserved for its partners (the MERLOT Reserve collection), with access restricted to MERLOT-authenticated individuals, who could play, print, and download content.
In Phase 2, we looked at three main functions of interest to publishers:
• uploading, hosting, and defining use conditions for content that publishers can distribute in a controlled environment;
• receiving and evaluating the reporting services on learning object use by customers (faculty and students); and
• distributing controlled-access content packages through their local course-management system.
Phase 2 had several major steps. First, the IBM team members worked with a publisher to create SCORM-compliant packages for the educational content the publisher selected for the test. The publisher then uploaded the content packages to an IBM-hosted environment, defined two use conditions (we limited conditions for simplicity), managed different versions of the content packages, and evaluated the performance and value of IBM services. Faculty and students then searched for, selected, downloaded, and used the online materials and evaluated the performance and value of the IBM services. Finally, IBM provided the publisher with a report about material use.
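As a concrete illustration, a minimal SCORM 1.2 package manifest has roughly the shape sketched below. The identifiers, titles, and file names are invented for illustration; the element layout follows the IMS content-packaging model the text describes (manifest metadata, an organization section, and a resource list).

```xml
<manifest identifier="psych105.pack" version="1.0"
    xmlns="http://www.imsglobal.org/xsd/imscp_v1p1"
    xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <metadata>
    <schema>ADL SCORM</schema>
    <schemaversion>1.2</schemaversion>
  </metadata>
  <organizations default="org1">
    <organization identifier="org1">
      <title>Introduction to Psychology</title>
      <item identifier="item1" identifierref="res1">
        <title>Lesson 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="res1" type="webcontent"
        adlcp:scormtype="sco" href="lesson1.html">
      <file href="lesson1.html"/>
    </resource>
  </resources>
</manifest>
```

The manifest is what the ingestion pipeline actually parses, which is why, as the Phase 2 findings below note, ambiguities in manifest fields become interoperability problems.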
FIELD TEST 1: SHOPPING
Phase 1 test participants were members of MERLOT's leadership councils, which comprise system-level academic technology administrators and faculty across a variety of disciplines. All 24 participants had extensive experience using MERLOT and online resources. They used a range of PCs protected by a variety of institutional firewalls. All of the PCs ran Windows XP or 2000. The prototype ede-marketplace does not support the Macintosh operating system.
Procedure
MERLOT staff uploaded MERLOT Reserve content into the IBM system, defined two use conditions through DRM services, and managed the content after initial uploading. MERLOT provided the MERLOT Reserve catalog and search/browse services to interface with IBM's hosting of MERLOT Reserve content.
Participants accessed content and evaluated services by first registering in the IBM system and downloading a client—software they had to install locally. The client let the computer use the content packages acquired through the IBM system. After participants installed and tested the client, they searched the MERLOT Reserve, selected three materials to download, and opened and used the selected material. Participants deliberately attempted to violate the use conditions. For example, if a condition was to display the material only, we asked them to try to print it.
After completing the procedure to the extent possible, participants filled out an online survey centered on four questions:
• Is the system effective in providing access to content packages in accordance with stated use conditions?
• Is the system easy to learn?
• Is the system easy to use?
• Do you like using the system?
Findings
Phase 1 testing provided several key insights, including verification that no one could violate the use conditions for the content packages. Many of these insights concerned problems that would hinder the academic community's adoption of the prototype ede-marketplace. Fortunately, we are in the process of fixing most of them.
Implementation problems. Most universities have firewalls to protect their networks and systems. Participants were using university computers, and the firewall prevented them from uploading content, downloading the client, obtaining licenses and content, and so on. During the testing, we had to solve this problem by using asynchronous Web services protocols instead of plain synchronous TCP/IP.
The field test highlighted the need to develop reliable protocols for downloading and installing separate client application software. In a prototype upgrade, we changed the design and provided the client code as a signed applet, which also lets us upgrade the client more flexibly. The test also showed some incompatibility between the DRM client and some versions of the software required for viewing the materials. This last problem points out the need to effectively manage interoperability on an ongoing basis.
Usability issues. The pilot report indicated that the system added several layers of complexity to accessing content. This complexity was the result of our focus on technology rather than on user comfort. We identified two main problems. First, users found it very difficult to navigate through three programs (e-content catalog, client, and LMS) to view one piece of content. IBM has since modified the system to use portal technology to improve ease of navigation. The enhanced navigation still needs testing to verify the improved ease of use. Second, users cited various inconvenient user interface features, such as a limited ability to resize certain windows for easier horizontal scrolling and some unclear warning messages. We did not see this as a major obstacle, however, because our design includes APIs between the system's core components and the user interface layer. Therefore, we can implement a variety of user interfaces, possibly even tailored to an institution or a user group.
Conditional use limitations. Users felt that basing DRM controls on counters (number of accesses) was inconvenient and conflicted with higher education use needs. They preferred time-based restrictions, such as making the content available for an academic semester. We have implemented this change in the new prototype upgrade.
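A time-based use condition of the kind participants preferred amounts to a simple check against a license window. The sketch below is illustrative only: the field names and dates are assumptions, not the prototype's actual license schema.

```python
from datetime import date

# Illustrative license record: a time window instead of an access counter.
# Field names and dates are invented for this sketch.
license_terms = {
    "content_id": "psych-105-reader",
    "rights": {"display"},            # permitted operations
    "valid_from": date(2005, 1, 24),  # start of the academic semester
    "valid_until": date(2005, 5, 20), # end of the semester
}

def may_use(terms, operation, today):
    """Grant access only for permitted operations inside the license window."""
    in_window = terms["valid_from"] <= today <= terms["valid_until"]
    return in_window and operation in terms["rights"]

print(may_use(license_terms, "display", date(2005, 3, 1)))   # mid-semester
print(may_use(license_terms, "print", date(2005, 3, 1)))     # operation not granted
print(may_use(license_terms, "display", date(2005, 6, 10)))  # after the semester
```

Unlike a counter, a window like this needs no server round-trip to decrement state, which is one reason it maps more naturally onto semester-based academic use.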
Developing models for academically acceptable use conditions, and viable business models built on them, is a critical issue requiring further work. Many participants also expressed the need to extend the system to support and protect more file formats than the HTML and PDF we used in the field test. We are working on extending system support to more file formats.
FIELD TEST 2: STOCKING THE SHELVES
The lessons learned from Phase 1 led us to run Phase 2 within the controlled environment of a usability laboratory, the Center for Usability in Design and Assessment (www.csulb.edu/cuda) at California State University, Long Beach. IBM had worked closely with a publisher to create and upload SCORM-compliant content packages, which faculty and students then searched for, found, and evaluated. The publisher then evaluated the usability and value of the ede-marketplace's services.
Three students and four faculty members participated in this phase, all of whom had at least a year's experience using Blackboard. Six participants were psychology faculty or students whose discipline aligned with the publisher's materials. The seventh participant was a computer science and information systems faculty member. All participants used a Pentium II with Windows XP, Internet Explorer 6.0, and Adobe Acrobat Reader 5.1.
Procedure
When the participants arrived, we gave them a general description of their tasks and told them we would be observing them perform the tasks for a 90-minute session. We asked participants to think out loud as they completed their tasks, and a CUDA staff member behind a one-way mirror prompted them to do so by asking about their performance and decisions. Participants provided continuous running commentary to the best of their ability about what they were doing, seeing, and thinking as well as what they liked, disliked, and expected. We also encouraged them to suggest system improvements.
Faculty participants received these context-setting instructions before they began the test: It is nearing the end of summer and you need to put together a course pack on Introduction to Psychology for your students in class Psychology 105. Use the provided instruction guide and the system to search and allocate content, and put the allocated content into the student course management system.
Student participants received these context-setting instructions before they began the test: It is nearing the end of the semester and you missed a couple of classes where crucial topics were discussed that you need to know for the final. You need to catch up and answer the assigned questions. To do this you will need to access the online course and assignment.
After completing the search and purchase tasks, participants viewed the material online. They then provided feedback about accessing content and the content's acceptability for an online course. CUDA staff videotaped the entire session and analyzed the participants' responses and behaviors. At the test's end, each participant completed a questionnaire that aimed to elicit additional information on DRM from an individual perspective.
Findings
We had to overcome several barriers in creating SCORM-compliant content packages and reliably uploading the packages for the publishers. One challenge was how to transform all the original content into PDF or HTML, the two formats the system could protect in this test. We thought the next phase would be easy and reliable because the publisher's content was supposed to be SCORM-compliant. However, we found that the specification of some SCORM fields was ambiguous. For example, a SCORM manifest is not deemed invalid if it contains proprietary extensions,3 which resulted in some packages being declared SCORM-compliant when they were not. Thus, the ede-marketplace had problems processing them. To solve this interoperability problem, we had to create a special tool to break the packages into smaller learning objects and then generate the manifest files and the metadata files in a way that is compatible with our SCORM implementation. The only long-term solution at this stage is for authors and publishers to ensure that they limit certain parameters to specific values when they develop content for the ede-marketplace or use special tools available through the ede-marketplace. Improving the specification of the SCORM metadata standards will be important to the ede-marketplace's usability.
Faculty and students were very positive about the possibility of an ede-marketplace. They did highlight significant navigation and user interface challenges, asking the test monitor for considerable assistance to complete their tasks. Participants consistently reported the need for a seamless and logical sequence of steps to find, select, and acquire materials.
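The compliance problem can be illustrated with a simplified check of the kind a manifest-ingestion tool has to perform: walk the manifest and flag any element whose XML namespace the importer does not understand. The namespace list, the rejection rule, and the sample manifest below are illustrative assumptions, not the exact rules of our SCORM implementation.

```python
import xml.etree.ElementTree as ET

# Namespaces a hypothetical SCORM 1.2 importer understands.
KNOWN = {
    "http://www.imsglobal.org/xsd/imscp_v1p1",   # IMS content packaging
    "http://www.adlnet.org/xsd/adlcp_rootv1p2",  # ADL SCORM extensions
}

def foreign_elements(manifest_xml):
    """Return tags of elements whose namespace the importer does not know."""
    root = ET.fromstring(manifest_xml)
    bad = []
    for el in root.iter():
        if el.tag.startswith("{"):               # namespace-qualified tag
            uri = el.tag[1:].split("}")[0]
            if uri not in KNOWN:
                bad.append(el.tag)
    return bad

# A manifest carrying an invented publisher extension element.
manifest = """<manifest xmlns="http://www.imsglobal.org/xsd/imscp_v1p1"
    xmlns:pub="http://publisher.example/xsd/extensions">
  <resources>
    <resource href="lesson1.html"/>
    <pub:drmInfo region="US"/>
  </resources>
</manifest>"""

print(foreign_elements(manifest))  # the pub:drmInfo element is flagged
```

Because the SCORM specification does not make such extensions invalid, a package can pass generic validation yet still trip a pipeline like this, which is exactly the mismatch we hit.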
EDE-MARKETPLACE SERVICES
Both test phases validated and expanded IBM's and MERLOT's assumptions and requirements for a successful ede-marketplace. Both faculty and publishers, and their consumers, listed six must-have features:
• reliable hosting, where users control access and availability;
• user-friendly interfaces that provide seamless and logical services;
• efficient version control protocols within a digital library;
• effective encryption and distribution protocols with DRM;
• easy interoperability for delivery to desktops and LMSs; and
• trustworthy reporting protocols to report use.
Table 1 lists a potential ede-marketplace model that provides four types of services.
Table 1. Services that one ede-marketplace model might offer.
Publishing services for content providers:
• Package content using SCORM
• Reliably host the content
• Define and control a wide range of use conditions for various academic constituents
• Define and control use conditions that reflect a variety of business models publishers can implement to achieve competitive advantage
Teaching services for faculty:
• Find and gain access to online teaching and learning materials
• Determine the quality and utility of the materials for their students
• Construct course curriculum with online content and create personal collections of materials
• Share online materials and pedagogy with their peers
Learning services for students:
• Find and use the course curriculum to achieve learning objectives
• Communicate within the learning environment
• Create and submit their assignments, course projects, and homework for evaluation
• Receive feedback about their learning performance
Reporting services for all:
• Design and distribute customized reports on content use and system performance
Testing an end-to-end system in real situations enabled us to understand the business rules, the different roles of the system users, and a well-defined set of services that must be provided for each system user role. The insights we obtained during the field tests will help all the ede-marketplace stakeholders make proper compromises when they are developing cost-effective systems that provide useful services. Our overarching goal is to develop a strategy that simultaneously achieves the goals of technology companies, publishers, and higher education to deliver digital content cost-effectively and with accountability. To meet that goal, we must further assess and field-test some of the creative proposals that could affect the academic publishing and higher education business landscape, turning lessons learned into lessons practiced. ■
Acknowledgments We thank Yael Ravin and Ahmed Tantawy for their numerous valuable suggestions throughout this project and during the initial manuscript writing. Special thanks go to the system development team in the IBM Cairo Technology Development Center for their dedication and creativity and to the MERLOT team members who participated in this effort, provided valuable support and feedback, and endured the usual complications of working with an evolving prototype.
References
1. G.L. Hanley, "Serving MERLOT with IBM: Researching Enabling DRM Technologies for Higher Education," Proc. Ann. Educause Conf., Educause, 2003; http://www.educause.edu/EDUCAUSE2003/1332.
2. N. Friesen, M. Mourad, and R. Robson, "Towards a Digital Rights Expression Language Standard for Learning Technology," 2003; http://xml.coverpages.org/DREL-DraftREL.pdf.
3. Advanced Distributed Learning, "SCORM Overview," 2003; http://www.adlnet.org/index.cfm?fuseaction=scormabt.
4. IEEE Standard for Learning Object Metadata, IEEE Press, 2002; http://shop.ieee.org/store/.
5. "IMS Content Packaging Information Model, v1.1.4 Final Specification," 2004; http://www.imsproject.org/content/packaging/.
6. M. Mourad et al., "WebGuard: A System for Web Content Protection," IBM Research Report RC21944, Nov. 2000.
Magda Mourad is CTO of IBM's Digital Media business unit and a research staff member at IBM T.J. Watson Research Center. Her research interests include digital media utilities and hosting services, digital rights management for secure content distribution, and e-learning and training systems based both on the Internet and digital broadcast networks. Mourad received a PhD in computer engineering from École des Mines de Nancy, France. She is a senior member of the IEEE and chairs the Digital Rights Expression Language working group of the IEEE Learning Technologies Standards Committee. Contact her at [email protected].
Gerard L. Hanley is the executive director of MERLOT and senior director for academic technology services for California State University, Office of the Chancellor. At MERLOT, he directs the development, delivery, and sustainability of MERLOT's organization and services to enhance teaching and learning with academic technologies. At CSU, he oversees the development and implementation of integrated electronic library resources and academic technology initiatives supporting CSU's 23 campuses. He is also the director of the Center for Usability in Design and Assessment at CSU, Long Beach. Hanley received a PhD in psychology from the State University of New York at Stony Brook. Contact him at [email protected].
Barbra Bied Sperling is a usability specialist at the Center for Usability in Design and Assessment at the California State University, Long Beach, and the MERLOT Webmaster. Contact her at [email protected].
Jack Gunther is a market development executive with IBM, where he is responsible for developing relationships with companies and organizations focused on serving the higher education marketplace. He received a BA in mathematics from Occidental College. Contact him at [email protected].
COVER FEATURE
Stiquito for Robotics and Embedded Systems Education Stiquito, a small robotic insect, has been used to introduce students to robotics. A new version features a preprogrammed microcontroller board that students can use to learn about the concepts of robotics and embedded systems.
James M. Conrad, University of North Carolina at Charlotte
0018-9162/05/$20.00 © 2005 IEEE
Robotics means different things to different people. Many conjure up images of Star Wars' R2D2 or C3PO moving about autonomously and conversing with others in any environment. Few think of unattended vehicles or manufacturing devices, yet robots are predominantly used in these areas. A robot is any electromechanical device that receives a set of instructions from humans and repeatedly carries them out until instructed to stop. By this definition, building and programming a toy car to follow a strip of black tape on the floor is an example of a robotic device, but building and driving a radio-controlled toy car is not.
Stiquito, a small hexapod robot, has been used for years in education. To walk, the robot's legs use "muscles" made of Flexinol. In the original, manually controlled design, the operator pressed two switches, attached via a tether, to control the six legs in sets of three. Most Stiquito robots built so far have been educational novelties because they could not be controlled by a programmable computer. This lack of a readily available controller hampered the efforts of academic researchers who wanted to use Stiquito to study colony robotics and emergent systems. Another obstacle has been the robot's original low cost and the consequent need to design and implement a reliable controller that is equally inexpensive and low-power.
Stiquito Controlled is a new self-controlled robot that uses a microcontroller to produce forward motion by coordinating the operation of its legs. Although the controller is sold programmed, educators and researchers can reprogram the board to examine other areas of robotics. In fact, the board can be used alone to learn embedded systems development concepts.
HISTORY OF STIQUITO
In the early 1990s, Jonathan Mills of Indiana University was looking for a robotic platform to test his research on analog logic. Most platforms available at that time were prohibitively expensive, especially for a young assistant professor with limited research money. Thus, Mills set out to design his own inexpensive robot. He chose four basic materials on which to base his design:
• For propulsion, he selected nitinol (specifically, Flexinol from Dynalloy Inc.),1 a material that would provide a "muscle-like" reaction for the circuitry and would closely mimic biological actions.
• For counterforce to the Flexinol, Mills selected music (spring) wire. The wire could serve as a force to stretch the Flexinol back to its original length and provide support for the robot.
• For the robot's body, he selected 1/8-inch-square plastic rod.
• For leg support, body support, and attachment of the Flexinol to the plastic, Mills chose aluminum tubing.
Mills experimented with various designs, ranging from a tiny two-inch-long four-legged robot to one that was four inches long and had floppy legs. This experimentation revealed that the robot's best movement was realized when the Flexinol was parallel to the ground and the leg was perpendicular when it touched the ground.
The Stiquito robot's six legs are divided into two tripods, with each tripod including two legs on one side of the robot and one on the other.2,3 This division provides a smoother motion without the complexity of having to control each leg individually. As Figure 1 shows, the original Stiquito walks by alternately activating each of the two tripods. While the first tripod is being activated, the second is in a "relaxed" state in which it is returning to the position in which its legs are perpendicular to the body. The second tripod is then activated, and the first is allowed to relax.
Figure 1. Stiquito ambulation. The robot walks by alternately activating the two tripods.
Stiquito was originally designed for only one degree of freedom. Two years later, Mills designed a larger version, Stiquito II, which had two degrees of freedom.2 As Figure 2 shows, two degrees of freedom means that Flexinol wire is used to pull the legs back (first degree) as well as to raise them (second degree).
Published by the IEEE Computer Society
EDUCATIONAL USES OF STIQUITO
Over the years, high school, community college, and university faculty have used Stiquito to educate tomorrow's engineers. In addition, scores of junior high and high school students choose Stiquito for their science fair projects, and thousands of hobbyists use Stiquito to dabble in robotics. Some specific examples of uses of Stiquito include the following:
• The New Jersey Institute of Technology's Department of Biomedical Engineering sponsors a Pre-Engineering Instructional and Outreach Program featuring Stiquito robot building (www.njit.edu/old/PreCollege/PrE-IOP/events.php).
• Texas A&M has used Stiquito in its Introduction to Engineering and Problem Solving course (ENGR111) (http://crcd.tamu.edu/curriculum/engr111/stiquito/index.php).
• Western Michigan uses Stiquito in ECE 123, Mobile Robotics: An Introduction to Electrical and Computer Engineering (http://homepages.wmich.edu/~miller/ECE123.html).
• Numerous projects at Penn State have used Stiquito (www.me.psu.edu/me415/fall99/stiquito/intro.html).
• High school student Max Eskin used Stiquito to perform gait experimentation (www.computer.org/books/stiquito/eskin.html).
The original Stiquito robot left it to users to design their own control circuitry. While this offered an excellent opportunity for innovation in robot control, more assistance for users (and their overworked instructors) would have been helpful. What the robot needed was supplied hardware for an embedded system implementation that included detailed control design.
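The alternating-tripod gait described earlier reduces to a simple activation schedule that a controller (or a student's first program) can generate. The sketch below is a toy model, not the Stiquito Controlled firmware; the leg labels are invented for illustration.

```python
# Toy model of the alternating-tripod gait: each tripod is two legs on
# one side of the body and one on the other. Contracting one tripod's
# Flexinol "muscles" while the other relaxes produces forward motion.
TRIPOD_A = ("left-front", "right-middle", "left-rear")
TRIPOD_B = ("right-front", "left-middle", "right-rear")

def gait(half_steps):
    """Return the activation schedule: each entry is (contracting, relaxing)."""
    schedule = []
    for i in range(half_steps):
        if i % 2 == 0:
            schedule.append((TRIPOD_A, TRIPOD_B))  # first tripod contracts
        else:
            schedule.append((TRIPOD_B, TRIPOD_A))  # then the roles swap
    return schedule

for active, relaxed in gait(4):
    print("contract:", active, "| relax:", relaxed)
```

On real hardware each half-step would also need a dwell time for the Flexinol to heat and cool, which is exactly the kind of timing problem a preprogrammed microcontroller board takes off the student's plate.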
EMBEDDED SYSTEMS DESIGN AND EDUCATION

Embedded systems are all around us in everyday items such as microwave ovens, cell phones, and automobiles. One important characteristic of an embedded system is that its microprocessor is purchased as part of some other piece of equipment. Embedded systems also often have

• dedicated software (possibly user-customizable) that performs limited, simple functions;
• a limited display device, or no general-purpose display at all;
• no “real” keyboard; and
• electronics that replace components that were once electromechanical.

The heart of an embedded system is either a microprocessor or a microcontroller. Both of these electronic devices run software and perform computations. The primary difference is that a microcontroller typically integrates a microprocessor and other peripheral devices on the same chip. These peripherals can include permanent memory (such as ROM, EEPROM, or flash), temporary memory (RAM), timing circuitry, analog-to-digital conversion circuitry, and communications circuitry.

Embedded systems are the largest and fastest-growing sector of the worldwide microprocessor industry, constituting approximately 99.99 percent of total microprocessor unit volume (the remaining 0.01 percent are used in general-purpose computers). This growth has occurred because the number of microprocessors used in personal computers is small compared to all of the microprocessors and microcontrollers used in non-PC products. Consider that the average home contains 30 to 100 processors, of which only five are within the home PC. There are microprocessors and microcontrollers inside automobiles, televisions, VCRs, DVD players, ovens, and even stove vent hoods.4 Analysts estimate that embedded systems are in more than 90 percent of worldwide electronic devices.

The opportunities for embedded systems developers are bright, considering that more and more products contain increasingly sophisticated electronics. In fact, today’s automobiles contain more than 200 pounds (91 kg) of electronics; the current 7-Series BMW and S-Class Mercedes each contain about 100 processors.5

Although embedded systems design has been an active field for decades, it is only now being incorporated into computer science and computer engineering curricula. Few colleges and universities offer such courses, and few textbooks are available for classroom use.
To address this problem, the IEEE Computer Society and the ACM formed a joint task force to develop guidelines for a computer engineering curriculum6 in which embedded systems is a core discipline or “knowledge area.” The task force stated that “Computer engineering must include appropriate and necessary design and laboratory experiences. A computer engineering program should include ‘hands-on’ experience in designing, building, and testing both hardware and software systems.”
The task force recommends a minimum of 20 hours of core lectures in embedded systems and suggests that courses cover other elective topics in embedded systems as well.

Figure 2. Stiquito II. The robot has two degrees of freedom, using Flexinol wire both to pull the legs back and to raise them.
USING STIQUITO TO TEACH EMBEDDED SYSTEMS DESIGN

The roots of embedded system control of Stiquito go back to the board developed by Nanjundan Mohon in 1993.2 This board contained a Motorola 68HC11 microcontroller (with EPROM memory), transistors to drive current to each Flexinol leg independently, and an infrared sensor. The software controlled the legs and accepted signals on the infrared sensor to change the gait. This implementation was only a one-degree-of-freedom controller. Jonathan Mills had observed a two-degrees-of-freedom robot in 1995 and realized that this form of locomotion was needed to make Stiquito walk quickly.
Many students had worked on autonomous two-degrees-of-freedom robots, but none was successful until 2001, when several student groups built robots for a class and entered them in a race. These robots, while successful, were based on the Parallax Basic Stamp 2 microcontroller, an easy-to-use but expensive platform. What was needed was a low-cost yet easy-to-use implementation.

In 2003, a senior design team at North Carolina State University was given the requirements for a Stiquito controller board. After investigating low-cost microcontrollers by Microchip (PIC), Renesas, Texas Instruments, and others, the team decided that the TI MSP430 was an appropriate choice. Using this device, they developed a design and successfully demonstrated a breadboard controller. Through additional prototyping and research, the TI MSP430F1122 microcontroller was chosen for use with the Stiquito robot, shown in Figure 3. This microcontroller’s useful features include the following:7

• Low power consumption of about 200 µA in active mode and 0.7 µA in standby mode. This is important for the microcontrolled Stiquito robot, which is designed to operate on battery power. The MSP430’s low-power modes let the processor sit in an idle state as desired while consuming minimum power.
• A 10-bit analog-to-digital converter. The A/D converter allows controlling the robot’s speed by reading a voltage from the onboard potentiometer. The converter is active only when a speed measurement is being performed, which maximizes the time the microcontroller can remain in low-power standby mode.
• Serial onboard programming (JTAG). Developers can use the free software that Texas Instruments provides to control microprocessor operations by stepping through code and watching variables.
• 4 Kbytes of flash memory. This is more than sufficient for the code required for Stiquito’s movement. In fact, nearly 70 percent of this memory remains free after programming the standard software, leaving plenty of room for adding new features and functionality.

Figure 3. An assembled Stiquito Controlled robot with some of the tools needed for assembly.
In addition to the TI microcontroller, the other main hardware components used for the Stiquito board include the following:8

• ULN2803AFW transistor (Darlington sink driver): This device amplifies the microcontroller’s output current, which is too weak to drive the Flexinol actuator wire that controls Stiquito’s movement.
• LEDs: Two LEDs indicate the source output presence and the synchronization of Stiquito’s movement with the current; a third LED serves as a power indicator.
• Potentiometer: The potentiometer controls the speed of Stiquito’s movement by changing the resistance applied to the input voltage; the microcontroller detects the change in voltage and adjusts the robot’s speed.
HANDS-ON EDUCATION Most of us learn more effectively by participating in hands-on activities, and extensive educational literature supports the integration of classroom learning with laboratory experience.9-11 Stiquito’s low-cost embedded system board and free Texas Instruments Integrated Development Environment (IDE) compiler provide a readily available platform that students can use to learn embedded systems concepts. Students can perform exercises in the lab or on a home PC. While only an inexpensive cable is needed to conduct typical exercises, a laboratory setup including networked PCs, bench power supplies, multimeters, and mixed-signal oscilloscopes is useful to teach instrumentation concepts as well. The Stiquito controller board has been used in laboratory assignments at the University of North
Carolina at Charlotte to give students experience in applying the design techniques discussed in lectures. Some exercises have included the following:

• software development using IDEs for design, coding, and debugging;
• hardware skills such as soldering, interfacing, design, and instrumentation; and
• power control and budgeting.

An experiment on timers and interrupts illustrates the skills and equipment required for a lab exercise. Students attach a mixed-signal oscilloscope to their microcontroller board to examine output ports while monitoring the entry and exit from interrupt service routines and the activation of hardware timers. Such an exercise is not possible without an edge-triggered oscilloscope with fairly deep digital storage.

Stiquito also can be used in interdisciplinary activities. For example, in 2005, the UNC Charlotte Mechanical Engineering robotics class built Stiquito Controlled robots for a class exercise.8 To further understand and appreciate the complexity of robotic design, mechanical engineering students were paired with embedded systems design students during another lab exercise. The mechanical engineers learned to build, compile, and download a new version of Stiquito code to the controller board. The embedded systems students assisted the mechanical engineers during the lab but did not do the actual work for them.
FUTURE OF STIQUITO One of the most important aspects of the Stiquito robot is its expandability. Just as the original design was expanded to a microcontrolled version, the new design also can be enhanced. There are open ports for additional sensors or output devices. Examples would be adding a proximity sensor to stop the walking motion when the robot reaches a wall or installing a radio transmitter to pass information to other Stiquitos or a base station. We are currently working to reduce Stiquito to one-half its present size. This robot will rely on a printed circuit board for the mechanical body but will keep the same embedded systems architecture. The smaller size will also mean less current consumption. We will also work on designing a JTAG interface that connects to a PC via the USB port, since newer PCs no longer have parallel port interfaces.
Is it possible to build a better Stiquito? Certainly it is. Everything can be improved. A fourth book that incorporates chapters written by contributors who have experimented with Stiquito will also include more educational activities and design applications. ■
References

1. Dynalloy, “Introduction to Flexinol,” www.dynalloy.com/AboutFlexinol.html.
2. J.M. Conrad and J.W. Mills, Stiquito: Advanced Experiments with a Simple and Inexpensive Robot, IEEE CS Press, 1997.
3. J.M. Conrad and J.W. Mills, Stiquito for Beginners: An Introduction to Robotics, IEEE CS Press, 1999.
4. J. Ganssle, “Born to Fail,” Embedded Systems Programming, Dec. 2002; www.embedded.com/showArticle.jhtml?articleID=9900877.
5. J. Turley, “Motoring with Microprocessors,” Embedded Systems Programming, Aug. 2003; www.embedded.com/showArticle.jhtml?articleID=13000166.
6. IEEE Computer Society/ACM Task Force on Computing Curriculum, “Computing Curricula—Computer Engineering, Final Report,” 12 Dec. 2004; www.eng.auburn.edu/ece/CCCE/.
7. Texas Instruments, “Datasheet: MSP430F1122,” http://focus.ti.com/docs/prod/folders/print/msp430f1122.html.
8. J.M. Conrad, Stiquito Controlled!, IEEE CS Press, 2005.
9. K. Behnam, “Development of an Undergraduate Structured Laboratory to Support Classical and New Base Technology Experiments in Communications,” IEEE Trans. Education, Feb. 1994, pp. 97-105.
10. V.E. DeBrunner et al., “The Telecomputing Laboratory: A Multipurpose Laboratory,” IEEE Trans. Education, Nov. 2001, pp. 302-310.
11. J.M. Conrad and J. Brickley, “Using Stiquito in an Introduction to Engineering Skills and Design Course,” Proc. 27th Ann. Conf. Frontiers in Education, Stipes Publishing, 1997, pp. 1212-1214.
James M. Conrad is an associate professor in the Department of Electrical and Computer Engineering at the University of North Carolina-Charlotte. His research interests include embedded systems, wireless communications, and robotics. Conrad received a PhD in computer engineering from North Carolina State University. He is a member of the IEEE Computer Society. Contact him at
[email protected].
COMPUTER SOCIETY CONNECTION
Kanai Award Honors Work in Distributed Systems

The Computer Society recently recognized two pioneers in the field of distributed computing for their groundbreaking work. Each has advanced new ideas and strategies that have helped to make distributed computing applications a prominent feature of the modern digital landscape.
DREAM KERNEL DEVELOPER RECEIVES 2005 KANAI AWARD
K.H. Kim
K.H. (Kane) Kim, a professor of electrical engineering and computer
science at the University of California, Irvine, recently received the 2005 IEEE Computer Society Tsutomu Kanai Award. Kim’s citation reads, “For fundamental and pioneering contributions to the scientific foundation of both real-time object structuring-based distributed computing and real-time fault-tolerant distributed computing.”

A member of the ACM and an IEEE Fellow, Kim has been recognized by the Computer Society on several occasions. A former chair of the Society’s Technical Committee on Distributed Processing, Kim received a Technical Achievement Award in 1998 and a Meritorious Service Award in 1995.

Kim originated the distributed recovery block technique and several other basic approaches for cost-effective design of ultrareliable fault-tolerant, real-time, distributed and parallel computer systems. The primary developer of the TMO (time-triggered message-triggered object) structuring scheme (also called RTO.k), Kim is also credited with developing the DREAM kernel, a prototype OS kernel providing guaranteed timely services.

In the mid-1980s, Kim founded UC Irvine’s Distributed Real-time Ever-Available Microcomputing (DREAM) Laboratory. The DREAM Lab, equipped with three advanced parallel and distributed computing testbeds, evaluated TMO-structured real-time system engineering and other techniques for fault tolerance in distributed computer systems.

Kim is a partner in the OptIPuter project, a proposed infrastructure that coordinates computational resources over parallel optical networks using existing IP communication mechanisms. An active volunteer in the Computer Society, Kim was a founding editorial board member of IEEE Transactions on Parallel and Distributed Systems.
IEEE Computer Society and Samsung SDS Sign Memorandum of Understanding In an April meeting at Samsung corporate headquarters in Seoul, officials from the IEEE Computer Society and Samsung SDS, the digital systems division of the global electronics manufacturer, signed a memorandum of understanding that outlines the framework of a new strategic partnership between the two organizations. “This MOU opens the door to providing IEEE Computer Society products and services featuring the Certified Software Development Professional program to a new audience in Asia,” said David Hennage, executive director of the Computer Society, who signed the document on behalf of the Society. “Company executives tell us they need standards, training, and certification to make an impact on product and service quality for their customers. Testing and quality assurance are just some of the critical issues that together we can help address.” “By adopting the CSDP certification program and its related content,” Hennage continued, “Samsung SDS can
fortify our development force’s competency and at the same time, by marketing the certification program in Korea, both institutions can find a way to create mutually beneficial business opportunities. We are sure that this will bring fruitful results to both institutions.” Initially, 30 software developers from Samsung SDS will take the CSDP exam under the new agreement. Later in the year, the organizations will meet to formalize the structure of their partnership. Samsung SDS started as a business branch within Samsung Electronics in the late 1980s. Since then, it has offered a full spectrum of IT services, including consulting, design, development, production, installation, operation, maintenance, fee collection systems, and traffic management administration. For more information on the memorandum of understanding or on the Computer Society’s CSDP program, contact Stacy Saul at
[email protected].
JAMES GOSLING WINS 2004 KANAI AWARD
James Gosling
In a special ceremony at ISADS 2004, Java innovator James Gosling received the 2004 IEEE Computer Society Tsutomu Kanai Award “for major contributions to advances in the technology for construction of distributed computing systems through invention of the Java Language system.” Sun Microsystems designed Java in the early 1990s in anticipation of a trend in digital technology that might require connecting many household machines together. A lack of consumer demand doomed this application. Sun officials then attempted to promote the technology to cable TV companies, but executives were intimidated by the system’s interactive nature. However, in the mid-1990s, the emergence of the World Wide Web resulted in an unexpected explosion of applications for Java. The language has remained popular since, evolving through several iterations over the past decade. One of the three originators on the project that led to the creation of Java, Gosling is currently a vice president and Fellow at Sun Microsystems. As Chief Technology Officer of the Developer Products group, he leads a team working to create a high-end tool that performs semantic modeling tasks for software developers. Designed to deal with today’s massively complex systems, semantic models can give developers sophisticated insight into the structure of an application. At Sun, Gosling has built a multiprocessor version of Unix, as well as several compilers. He also headed the development of a windows manager called Network-extensible Windowing
System (NeWS), a PostScript-based system for distributing computer processing power across a network. Emacs, a popular and powerful text editor for Unix systems, is another of Gosling’s creations. In 2004, Gosling was elected to the National Academy of Engineering for his work on Java. After the momentum Java received from Internet sites, the language has evolved into a nearly ubiquitous fixture of the computing landscape, powering everything from cell phones and PDAs to on-board
automotive applications. In Brazil, a Java-based national healthcare system links 12 million people in 44 cities.
The Kanai award recognizes major contributions to state-of-the-art distributed computing systems and their applications. A crystal memento and a $10,000 honorarium accompany the award, which is presented each year at the International Symposium on Autonomous Decentralized Systems. ■
Computer Society Offers Wide Range of Awards
The IEEE Computer Society sponsors a program of awards that is designed to recognize technical achievements and service to both the society and the profession. Technical awards are given for pioneering and significant contributions to the field of computer science and engineering. Service awards may be presented to volunteers or to staffers for well-defined and highly valued contributions to the society. In most cases, there are no eligibility restrictions on either the nominee or the nominator.
STROUSTRUP GARNERS 2004 COMPUTER ENTREPRENEUR AWARD
Bjarne Stroustrup
Texas A&M’s Bjarne Stroustrup, the designer and original implementer of the widely used C++ programming language, recently received the 2004
Computer Entrepreneur Award. His citation reads, “For pioneering the development and commercialization of industrial-strength, object-oriented programming technologies and the profound changes they fostered in business and industry.”

While at Bell Labs, Stroustrup developed C++. The language originated as an extension of the C programming language, retaining most of C’s efficiency and flexibility but also supporting features that C alone could not. In particular, C++ was designed to support object-oriented programming, incorporating the concept of classes from older languages. Originally designed for use in Unix environments, C++, like C before it, can be used with any OS.

Currently the College of Engineering Chair Professor of Computer Science at Texas A&M University, Stroustrup retains ties with Bell Labs (now known as AT&T Labs-Research). Elected to the National Academy of Engineering in 2004, Stroustrup is also a Fellow of the IEEE, the ACM, and AT&T. In 2005, he became the first computer scientist ever to be awarded the William Procter Prize for Scientific Achievement from the Sigma Xi scientific research society.
Past recipients of the Computer Entrepreneur Award include Gene Amdahl, Daniel Bricklin, Michael Dell, Bill Gates, Andrew Grove, and Steve Jobs.
HANS KARLSSON AWARDS HONOR STANDARDS CONTRIBUTIONS While some Computer Society awards, like the Entrepreneur Award, recognize individual innovation, others, like the Hans Karlsson Award, honor leadership through collaboration. Recognizing that individual, corporate, and organizational rivalries within the computer industry sometimes hinder the common good, the Computer Society established the Hans Karlsson Award for Leadership and Achievement through Collaboration.
Wayne Hodgins

Wayne Hodgins of the design software and digital content company Autodesk received the Hans Karlsson Award in recognition of his work on educational technology standards. His citation reads, “For your extraordinary leadership and vision that led to the first learning technology standard and that was instrumental in moving an entire industry to pursue a standards-based rather than a proprietary approach.” Hodgins is the elected chair of the IEEE P1484 Standards Working Group for Learning Object Metadata, a focus group of the IEEE Learning Technology Standards Committee. In his role as Director of Worldwide Learning Strategies at Autodesk, Hodgins is responsible for improving interactions among employees, partners, and customers through what he refers to as learnativity.
Victor Hayes

For his “dedication to the advancement of technologies and their use in a wide area of segments, markets, and applications benefiting all our lives,” Victor Hayes also received the Hans Karlsson Award. The award recognizes his long-standing involvement with the
IEEE 802.11 wireless standard. Hayes chaired the IEEE 802.11 Wireless LAN Working Group, a subcommittee of the IEEE 802 LAN/MAN Standards Committee, at the time of the 802.11 standard’s release in 1996. Broad adoption of 802.11-compliant technology has pushed down prices of wireless Web access, service, and equipment, leading to widespread wireless Internet connectivity at hotels, airports, cafes, businesses, and homes. Hayes is widely hailed as the champion of the 802.11 standard. Hayes spent most of his career at Lucent Technologies and retired from his position as senior scientist at Agere Systems in 2003.
BOEING AND IBM EARN SOFTWARE PROCESS ACHIEVEMENT AWARDS

In addition to recognizing the leadership of individuals in promoting collaboration, the Computer Society presents awards that praise the collaborative efforts of work groups. The Software Process Achievement Award highlights innovation demonstrated by an individual or team responsible for an improvement to their organization’s software process. To be considered for this award, the improvement must be sustained, measured, and significant.

In 2004, the Software Process Engineering group at Boeing and the Global Services Application Management group at IBM both received Software Process Achievement Awards. Boeing received the honor “in recognition of significant, measured software process improvements throughout the Information Systems Division and establishment of a firm basis for continued improvements into the future.” The award was presented to the IBM team “in recognition of rapid, continuous improvement to their software capability in response to increasingly stringent marketplace demands.”

Past winners of the Software Process Achievement Award include teams from Wipro Technologies, Hughes, and Raytheon.
DISTINGUISHED SERVICE IN A PRE-COLLEGE ENVIRONMENT AWARD Because excellence in engineering often begins with early exposure to the field, the Computer Society presents an award to individuals who further the professional and technical goals of the IEEE Computer Society in a pre-college environment. Robert A. Reilly received this recognition for his role in founding the K12Net bulletin board system, an early tool for connecting teachers and students to the emerging Internet. Established in 1990—before most K-12 teachers, students, and administrators had heard of the Internet— K12Net was an education-focused bulletin board system. By the end of its brief run, tens of thousands of teachers, children, and parents had experienced online computing. Although K12Net’s international growth was explosive, the FIDOnet technology that it relied upon soon became obsolete. By 1997, as the World Wide Web and inexpensive local access to the Internet started to become more widespread, nearly all K12Net BBSs had disappeared. Reilly, a senior member of the IEEE, is a visiting scientist at the Massachusetts Institute of Technology Media Lab, where he is investigating the role of emotions and their impact on learning.
The IEEE Computer Society maintains an active awards program, presenting dozens of annual awards that can carry honoraria of up to $10,000. The deadline to nominate peers for most awards is 1 October. For details on individual award criteria, listings of past winners, and nomination forms for upcoming awards, visit www.computer.org/awards/. ■
Editor: Bob Ward, Computer;
[email protected]
CALL AND CALENDAR
CALLS FOR IEEE CS PUBLICATIONS

IEEE Annals of the History of Computing is planning a January-March 2007 special issue on the computer communications services and technologies that existed before the development of the Internet. Annals seeks papers on bulletin boards, dialup servers, and communications packages on time-share systems; store-and-forward networks, pre-OSI communications protocols, Bitnet, Csnet, Telenet, Prodigy, Fidonet, CompuServe, and other networks; early use of Listserv and similar protocols; and any technology providing a service that has since been replaced by software running on the Internet. Interested authors should contact Annals Editor in Chief David Alan Grier at
[email protected]. Abstracts are due by 1 September. To view the complete call for papers, visit www.computer.org/portal/pages/annals/content/cfp07.html.

IEEE Software magazine plans a July/August 2006 special issue on software verification and validation techniques. Software seeks papers on topics that include the automation of software testing, experiences in testing process improvement, testing metrics, and best practices for testing in specific domains.
Submission Instructions

The Call and Calendar section lists conferences, symposia, and workshops that the IEEE Computer Society sponsors or cooperates in presenting. Complete instructions for submitting conference or call listings are available at www.computer.org/conferences/submission.htm. A more complete listing of upcoming computer-related conferences is available at www.computer.org/conferences/.
Software focuses on providing its readers with practical and proven solutions to real-life challenges. Complete author instructions are available at www.computer.org/software/author.htm#Submission. Submissions are due by 1 November. View the complete call for papers at www.computer.org/software/edcal.htm.
OTHER CALLS

IEEE VR 2006, IEEE Virtual Reality Conf., 25-29 Mar. 2006, Alexandria, Va. Submissions due 4 September. www.vr2006.org/cfp.htm#paper

DSN 2006, Int’l Conf. on Dependable Systems & Networks, 25-28 June 2006, Philadelphia. Abstracts due 18 November. www.dsn2006.org/

CALENDAR

JULY 2005

5-8 July: ICALT 2005, 5th IEEE Int’l Conf. on Advanced Learning Technologies, Kaohsiung, Taiwan. www.ask.iti.gr/icalt/2005/

6-8 July: ICME 2005, IEEE Int’l Conf. on Multimedia & Expo, Amsterdam. www.icme2005.com/

6-8 July: IOLTS 2005, 11th IEEE Int’l On-Line Testing Symp., Saint Rafael, France. http://tima.imag.fr/conferences/IOLTS/iolts05/Index.html

11-14 July: ICPS 2005, IEEE Int’l Conf. on Pervasive Services, Santorini, Greece. www.icps2005.cs.ucr.edu/

11-14 July: MemoCode 2005, 3rd ACM-IEEE Int’l Conf. on Formal Methods & Models for Codesign, Verona, Italy. www.irisa.fr/manifestations/2005/MEMOCODE/

12-15 July: ICWS 2005, 3rd IEEE Int’l Conf. on Web Services, Orlando, Fla. http://conferences.computer.org/icws/2005/

12-15 July: SCC 2005, IEEE Int’l Conf. on Services Computing (with ICWS 2005), Orlando, Fla. http://conferences.computer.org/scc/2005/

18-19 July: WMCS 2005, 2nd IEEE Int’l Workshop on Mobile Commerce & Services (with CEC 2005), Munich. www.mobile.ifi.lmu.de/Conferences/wmcs05/

19-22 July: CEC 2005, 7th Int’l IEEE Conf. on E-Commerce Technology, Munich. http://cec05.in.tum.de/

20-21 July: WRTLT 2005, Workshop on RTL & High-Level Testing, Harbin, China. http://wrtlt05.hit.edu.cn/

20-22 July: ICPADS 2005, 11th Int’l Conf. on Parallel & Distributed Systems, Fukuoka, Japan. www.takilab.k.dendai.ac.jp/conf/icpads/2005/

23-25 July: ASAP 2005, IEEE 16th Int’l Conf. on Application-Specific Systems, Architectures, & Processors, Samos, Greece. www.ece.uvic.ca/asap2005/

24 July: CLADE 2005, Workshop on Challenges of Large Applications in Distributed Environments (with HPDC-14), Research Triangle Park, N.C. www.cs.umd.edu/CLADE2005/

24-27 July: HPDC-14, 14th IEEE Int’l Symp. on High-Performance Distributed Computing, Research Triangle Park, N.C. www.caip.rutgers.edu/hpdc2005/

26-28 July: Compsac 2005, 29th Ann. Int’l Computer Software & Applications Conf., Edinburgh. http://aquila.nvc.cs.vt.edu/compsac2005/

27-29 July: NCA 2005, 4th IEEE Int’l Symp. on Network Computing & Applications, Cambridge, Mass. www.ieee-nca.org/
Published by the IEEE Computer Society
AUGUST 2005

2-4 Aug: ICCNMC 2005, Int’l Conf. on Computer Networks & Mobile Computing, Zhangjiajie, China. www.iccnmc.org/

4-5 Aug: MTDT 2005, IEEE Int’l Workshop on Memory Technology, Design, & Testing, Taipei, Taiwan. http://ats04.ee.nthu.edu.tw/~mtdt/

8-10 Aug: ICCI 2005, 4th IEEE Int’l Conf. on Cognitive Informatics, Irvine, Calif. www.enel.ucalgary.ca/ICCI2005/

8-11 Aug: CSB 2005, IEEE Computational Systems Bioinformatics Conf., Palo Alto, Calif. http://conferences.computer.org/bioinformatics/

14-16 Aug: Hot Chips 17, Symp. on High-Performance Chips, Palo Alto, Calif. www.hotchips.org/

17-19 Aug: RTCSA 2005, 11th IEEE Int’l Conf. on Embedded & Real-Time Computing Systems & Applications, Hong Kong. www.comp.hkbu.edu.hk/~rtcsa2005/

29 Aug.-2 Sept: RE 2005, 13th IEEE Int’l Requirements Eng. Conf., Paris. http://crinfo.univ-paris1.fr/RE05/
SEPTEMBER 2005

7-9 Sept: SEFM 2005, 3rd IEEE Int’l Conf. on Software Eng. & Formal Methods, Koblenz, Germany. http://sefm2005.uni-koblenz.de/

12-14 Sept: IWCW 2005, 10th Int’l Workshop on Web Content Caching & Distribution, Sophia Antipolis, France. http://2005.iwcw.org/

15-16 Sept: AVSS 2005, Conf. on Advanced Video & Signal-Based Surveillance, Como, Italy. www-dsp.elet.polimi.it/avss2005/

17-21 Sept: PACT 2005, 14th Int’l Conf. on Parallel Architectures & Compilation Techniques, St. Louis. www.pactconf.org/pact05/

18-21 Sept: CDVE 2005, 2nd Int’l Conf. on Cooperative Design, Visualization, & Eng., Palma de Mallorca, Spain. www.cdve.org/

19-21 Sept: CODES + ISSS 2005, Int’l Conf. on Hardware/Software Codesign & System Synthesis, Jersey City, N.J. www.codes-isss.org/

19-22 Sept: Metrics 2005, 11th IEEE Int’l Software Metrics Symp., Como, Italy. http://metrics2005.di.uniba.it/

19-22 Sept: WI-IAT 2005, IEEE/WIC/ACM Int’l Joint Conf. on Web Intelligence & Intelligent Agent Technology, Compiegne, France. www.comp.hkbu.edu.hk/WI05/

19-23 Sept: EDOC 2005, 9th Int’l Conf. on Enterprise Computing, Enschede, Netherlands. http://edoc2005.ctit.utwente.nl/

20-22 Sept: WRAC 2005, 2nd IEEE/NASA/IBM Workshop on Radical Agent Concepts, Greenbelt, Md. http://aaaprod.gsfc.nasa.gov/WRAC/home.cfm

21-24 Sept: VL/HCC 2005, IEEE Symp. on Visual Languages & Human-Centric Computing, Dallas. http://viscomp.utdallas.edu/vlhcc05/

23-24 Sept: ISoLA 2005, Workshop on Leveraging Applications of Formal Methods, Verification, & Validation, Columbia, Md. www.technik.unidortmund.de/tasm/isola2005/html/index.html

25-30 Sept: ICSM 2005, 21st IEEE Int’l Conf. on Software Maintenance, Budapest. www.inf.u-szeged.hu/icsm2005/

26-29 Sept: MASCOTS 2005, Int’l Symp. on Modeling, Analysis, & Simulation of Computer & Telecomm. Systems, Atlanta. www.mascotsconference.org/

27-30 Sept: Cluster 2005, IEEE Int’l Conf. on Cluster Computing, Boston. www.cluster2005.org/

30 Sept.-1 Oct: SCAM 2005, 5th IEEE Int’l Workshop on Source Code Analysis & Manipulation (with ICSM), Budapest. www.dcs.kcl.ac.uk/staff/mark/scam2005/

OCTOBER 2005

2-5 Oct: ICCD 2005, Int’l Conf. on Computer Design, San Jose, Calif. www.iccd-conference.org/

2-7 Oct: MoDELS 2005, 8th IEEE/ACM Int’l Conf. on Model-Driven Eng. Languages & Systems (formerly UML), Montego Bay, Jamaica. www.umlconference.org/

3-5 Oct: DFT 2005, 20th IEEE Int’l Symp. on Defect & Fault Tolerance in VLSI Systems, Monterey, Calif. www3.deis.unibo.it/dft2005/

3-7 Oct: BroadNets 2005, 2nd IEEE Int’l Conf. on Broadband Networks, Boston. www.broadnets.org/

5-8 Oct: ISMAR 2005, 4th IEEE & ACM Int’l Symp. on Mixed & Augmented Reality, Vienna. www.ismar05.org/

6-8 Oct: WWC 2005, IEEE Int’l Symp. on Workload Characterization, Austin, Texas. www.iiswc.org/iiswc2005/

7-8 Oct: GridNets 2005, 2nd Int’l Workshop on Networks for Grid Applications (with BroadNets 2005), Boston. www.gridnets.org/

10-12 Oct: DS-RT 2005, 9th IEEE Int’l Symp. on Distributed Simulation & Real-Time Applications, Montreal. www.cs.unibo.it/ds-rt2005/

12-14 Oct: HASE 2005, 9th IEEE Int’l Symp. on High-Assurance Systems Eng., Heidelberg, Germany. http://hase.informatik.tu-darmstadt.de/

15-21 Oct: ICCV 2005, 10th IEEE Int’l Conf. on Computer Vision, Beijing. www.research.microsoft.com/iccv2005/

17-19 Oct: BIBE 2005, IEEE 5th Symp. on Bioinformatics & Bioeng., Minneapolis. www.bibe05.org/

18-20 Oct: ICEBE 2005, IEEE Int’l Conf. on e-Business Eng., Beijing. www.cs.hku.hk/icebe2005/

18-21 Oct: ISWC 2005, 9th Int’l Symp. on Wearable Computers, Osaka, Japan. www.cc.gatech.edu/ccg/iswc05/

19-21 Oct: AIPR 2005, 34th IEEE Applied Imagery Pattern Recognition Workshop, Washington, D.C. www.aipr-workshop.org/
NOVEMBER 2005 2-4 Nov: MTV 2005, 6th Int’l Workshop on Microprocessor Test & Verification, Austin, Texas. http://mtv. ece.ucsb.edu/MTV/ 6-9 Nov: ICNP 2005, 13th IEEE Int’l Conf. on Network Protocols, Boston. http://csr.bu.edu/icnp2005/ 6-10 Nov: ICCAD 2005, IEEE/ACM Int’l Conf. on Computer-Aided Design, San Jose, Calif. www.iccad.com/ 7-10 Nov: MASS 2005, 2nd IEEE Int’l Conf. on Mobile Ad Hoc & Sensor Systems, Washington, D.C. www. mass05.wpi.edu/ 7-11 Nov: ASE 2005, 20th IEEE/ACM Int’l Conf. on Automated Software Eng., Long Beach, Calif. www.ase-conference.org/ 8-10 Nov: ITC 2005, Int’l Test Conf., Austin, Texas. www.itctestweek.org/
19-22 Oct: FIE 2005, Frontiers in Education Conf., Indianapolis, Ind. http://fie.engrng.pitt.edu/fie2005/
8-11 Nov: ISSRE 2005, 16th IEEE Int’l Symp. on Software Reliability Eng., Chicago. www.issre.org/
19-22 Oct: Tapia 2005, Richard Tapia Celebration of Diversity in Computing Conf., Albuquerque, N.M. www.ncsa. uiuc.edu/Conferences/Tapia2005/
10-11 Nov: SDD 2005, 2nd IEEE Int’l Workshop on Silicon Debug & Diagnosis, Austin, Texas. http://evia. ucsd.edu/conferences/sdd/05/
23-25 Oct: FOCS 2005, 46th Ann. IEEE Symp. on Foundations of Computer Science, Pittsburgh. www.cs. cornell.edu/Research/focs05/
12-16 Nov: Micro 2005, 38th ACM/IEEE Int’l Symp. on Microarchitecture, Barcelona, Spain. http:// pcsostres.ac.upc.edu/micro38/
23-28 Oct: IEEE Visualization 2005, Minneapolis. http://vis.computer.org/ vis2005/
12-18 Nov: SC 2005, Seattle. http:// sc05.supercomputing.org/
26-28 Oct: ANCS 2005, Symp. on Architectures for Networking & Comm. Systems, Princeton, N.J. www. cesr.ncsu.edu/ancs 26-28 Oct: SRDS 2005, 24th Int’l Symp. on Reliable Distributed Systems, Orlando, Fla. http://srds05.csee.wvu.edu/
14-16 Nov: ICTAI 2005, 17th Int’l Conf. on Tools with AI, Hong Kong. http://ictai05.ust.hk/ 15-17 Nov: LCN 2005, 30th IEEE Conf. on Local Computer Networks, Sydney, Australia. www.ieeelcn.org/
4th Int’l Symp. on Empirical Software Eng., Noosa Heads, Australia. http:// attend.it.uts.edu.au/isese2005/ 26-30 Nov: ICDM 2005, 5th IEEE Int’l Conf. on Data Mining, New Orleans. www.cacs.louisiana.edu/ ~icdm05/ 28-30 Nov: WMTE 2005, 3rd IEEE Int’l Workshop on Wireless & Mobile Technologies in Education, Tokushima, Japan. http://lttf.ieee.org/ wmte2005/
DECEMBER 2005 5-8 Dec: RTSS 2005, 26th IEEE RealTime Systems Symp., Miami Beach, Fla. www.rtss.org/
HASE 2005: 9th IEEE International Symposium on High-Assurance Systems Engineering The annual IEEE High-Assurance Systems Engineering Symposium brings together experts from academia and industry to discuss current issues in service-critical, high-assurance systems. These types of applications include complex vehicular systems, military command and control centers, nuclear reactors, telecommunications networks, and critical e-commerce applications. HASE 2005 will take place from 12-14 October in Heidelburg, Germany. Conference papers are expected to address topics that include high-assurance system and software designs, real-time systems and services, and model-based assurance evaluations. Highlighted events at HASE 2005 include panel discussions, demonstrations, focused workgroups, and case study presentations. For further information on HASE 2005, including venue and program details, visit http://hase.informatik. tu-darmstadt.de/.
17-18 Nov: ISESE 2005, ACM-IEEE June 2005
87
CAREER OPPORTUNITIES
AIR FORCE OFFICE OF SCIENTIFIC RESEARCH, Chief Information Officer. The Air Force Office of Scientific Research (AFOSR), the single manager of the United States Air Force basic research program, has a position open for a Chief Information Officer with a strong background in computer science and telecommunications. This position involves managing, operating, and planning the AFOSR local/wide area networks and business systems in support of AFOSR operations. Develops AFOSR’s IT strategic and tactical plans, to include milestones, spending plans, and execution plans. Manages and oversees the development, operation, and maintenance of the AFOSR Management Information System. Ensures AFOSR’s IT infrastructure adheres to all Department of Defense, Air Force, Air Force Materiel Command, and Air Force Research Laboratory IT directives, regulations, and policies. AFOSR is located in the Ballston area of Arlington, Va., near the Office of Naval Research (ONR), the National Science Foundation (NSF), and the Defense Advanced Research Projects Agency (DARPA). The present annual salary ranges from $94,187 to $116,517, depending upon the individual's qualifications and salary history. Candidates should have outstanding communication skills, the ability to work with demanding schedules, and the interpersonal skills to work with a wide variety of customers. Applicants should have a strong background in computer science or a related field. Applicants must be U.S. citizens, be able to obtain a security clearance and to travel by military and civil aircraft, and comply with provisions of the Ethics in Government Act. Submit applications online at https://wrightpattjobs.wpafb.af.mil.
OLD DOMINION UNIVERSITY, Department Chair, Electrical and Computer Engineering. The Frank Batten College of Engineering and Technology at Old Dominion University invites applications for the ECE Chair. The ECE Department consists of 19 faculty members and offers baccalaureate, master's, and Ph.D. programs, with annual research expenditures exceeding $3M. Faculty are also heavily engaged in the College’s Applied Research Center, Center for Bioelectrics, and Virginia Modeling, Analysis and Simulation Center. The selected candidate will have an earned doctorate in electrical or computer engineering or a related field, a record meriting appointment as a tenured full professor, demonstrated experience in securing external support, excellent communication skills, and a strong commitment to academic excellence and diversity in the faculty and student body. The Chair is expected to provide effective leadership in teaching, research, and professional service and to have a development agenda for garnering external funding. The selected candidate is expected to build upon, plan, and shape the department’s core strengths, stimulate a collegial environment, promote academic excellence with top-quality instruction, and continue development of nationally recognized research. For more information about the department, see www.ece.odu.edu. The selected candidate will help attract and mentor outstanding new faculty, work with other departments and centers, promote partnerships with industry, and foster relationships with our alumni and local community. Salary is commensurate with qualifications and experience. The start date is January 1, 2006. Applicants should submit a vision statement and a cover letter addressing these desired attributes, a curriculum vita, and contact information for four references to ECE Search Committee Chair, 102 Kaufman Hall, Old Dominion University, Norfolk, Virginia, 23529-0236; email: [email protected]. Screening will begin on July 1, 2005 and continue until the position is filled. Old Dominion University is an equal opportunity, affirmative action employer and requires compliance with the Immigration Reform and Control Act of 1986.

MCMASTER UNIVERSITY, Chair, Department of Computing and Software. McMaster University’s Faculty of Engineering is seeking a dynamic leader for its Department of Computing and Software. This is a tenured position at the rank of Professor. We are looking for an accomplished scholar who can provide academic and administrative leadership to the Department. Candidates should have a Ph.D. in software engineering, computer science, or a related field; an excellent research and teaching record; a record of strong external research funding; demonstrated administrative abilities; and registration, or a commitment to registration, as a professional engineer. Excellent communication skills and demonstrated outreach to the community and profession are required. The Department has a complement of 28 faculty members. It offers undergraduate programs in Software Engineering and in Computer Science, including one of the first accredited undergraduate software engineering programs in Canada. At the graduate level, the Department offers Master of Applied Science, Master of Engineering, and Ph.D. programs in Software Engineering, and Master of Science and Ph.D. programs in Computer Science. The Department currently has 324 undergraduate and 85 graduate students. It has three Canada Research Chairs, and research initiatives include the Software Quality Research Laboratory, the Advanced Optimization Laboratory, and the Algorithms Research Group. The Department is also spearheading the new School of Computational Engineering and Science at McMaster University. The Faculty of Engineering is one of the most research-intensive Faculties of Engineering in Canada, with a complement of 136 faculty members in seven Departments. All qualified candidates are encouraged to apply; however, Canadian citizens and permanent residents will be given priority. McMaster is strongly committed to employment equity within its community, and to recruiting a diverse faculty and staff. The University encourages applications from all qualified candidates, including women, members of visible minorities, Aboriginal persons, members of sexual minorities, and persons with disabilities. Applications and nominations should be forwarded to: Dr. M. A. Elbestawi, Dean, Faculty of Engineering, McMaster University, 1280 Main Street West, JHE 261, Hamilton, Ontario, Canada L8S 4L7. For more information visit our website: http://www.cas.mcmaster.ca/cas/. Candidates will be considered until the position is filled.
NETWORK ADMINISTRATOR (NY, NY) w/B.S. or Foreign Equivalent in Comp. Sci. or Engg. or Math. & 2 yrs exp to design & implement LAN, WAN networks using FDDI, SNA, ISDN, Data/Video & wireless networks. Implement protocols using OSPF, BGP, RIP, EIGRP, IGRP, TCP/IP, IPX/SPX, PPTP, NIS, NIS+ & AppleTalk. Evaluate & recommend equipment using switches, routers, firewalls & multiplexers. Send resumes to: Datavision Computer & Video, 445 5 Ave., NY, NY 10016.
KUWAIT UNIVERSITY, Physics Department. The Physics Department is seeking qualified candidates with a strong commitment to high-quality teaching to fill a vacancy in Engineering Physics (Digital Systems). Candidates should be able to teach the major courses on offer in this program, such as: (a) Fundamental and advanced digital electronics, (b) Principles of analogue electronics, (c) Microcomputer architecture (Intel-based systems), (d) Assembly language (Intel devices), (e) Object-oriented programming languages (e.g., Java), (f) Digital image and signal processing. In addition to teaching the fundamental physics courses, the appointee will be expected to teach at both the undergraduate and graduate levels. Rank and salary will be commensurate with experience and qualifications. Appointment is subject to contract. General Requirements: (a) PhD or its equivalent in the required areas above, (b) Research experience and publications, (c) University teaching experience, (d) Excellent knowledge of English. Approximate Salaries and Benefits [Rank/Monthly Salary*]: [Professor/1735KD-1895KD]; [Associate Professor/1385KD-1545KD]; [Assistant Professor/1095KD-1255KD]. *Monthly salary = basic salary + professional, technical, and social allowances. 1 KD = approx. US $3.25, UK £2.30. There is no income tax in Kuwait and currency is transferable without restriction. For more information see: http://www.vpaa.kuniv.edu.kw/vpaa/GINFO.PDF. Monthly housing allowance (KD 350 or 450) and a one-time furniture allowance (KD 3500 or 4500), depending on marital status. Education allowances for up to three children attending school in Kuwait. Baggage and freight allowance. 60 days (2 months) paid summer leave and a two-week midyear break. Annual round-trip air tickets. End-of-service gratuity. Free general medical care in Kuwait government hospitals. Attendance at one approved conference per year. Financial support for research through a competitive awards system. Excellent academic environment in a newly established faculty. The post is available immediately and is open until suitable candidates are appointed.
Letters of application (preferably by e-mail), accompanied by a detailed curriculum vitae and the names, addresses, telephone and fax numbers, and e-mail addresses of three referees, should be submitted to: The Dean, Faculty of Science, Kuwait University, P.O. Box 5969, Safat, 13060, Kuwait. For inquiries use: Fax: (+965) 4819374; e-mail: [email protected].
D. E. SHAW RESEARCH AND DEVELOPMENT, Systems Architects and ASIC Engineers: Specialized Supercomputer for Computational Drug Design. Extraordinarily gifted systems architects and ASIC design and verification engineers are sought to participate in the development of a special-purpose supercomputer designed to fundamentally transform the process of drug discovery within the pharmaceutical industry. This early-stage, rapidly growing project is being financed by the D. E. Shaw group, an investment and technology development firm with approximately US $14 billion in aggregate capital. The project was initiated by the firm’s founder, Dr. David E. Shaw, and operates under his direct scientific leadership. This project aims to combine an innovative, massively parallel architecture incorporating 90-nanometer “system on a chip” ASICs with novel mathematical techniques and groundbreaking algorithmic advances in computational biochemistry to direct unprecedented computational power toward the solution of key scientific and technical problems in the field of molecular design. Successful candidates will be working closely with a number of the world’s leading computational chemists and biologists, and will have the opportunity not only to participate in an exciting entrepreneurial venture with considerable economic potential, but to make fundamental contributions within the fields of biology, chemistry, and medicine. The candidates we seek will be unusually intelligent and accomplished, with a demonstrated ability to design and implement complex, high-performance hardware solutions based on the latest semicustom technologies. We are prepared to reward exceptionally well-qualified individuals with above-market compensation. Please send resume, along with GPAs, standardized test scores (SAT, GRE), and compensation history, to [email protected].
PRESIDENT & CEO, Melville, NY. Manage teams involved in dsgng & dvlpg comp graphic & info systms for broadcast, cable & satellite tv networks & data & media services; direct & supervise all day-to-day co. operations, incl sales, mktg, finance, personnel & legal matters; dvlp & implmt business strategies & organizational policies; identify & promote business initiatives; oversee & eval performance of managers & executives; review & approve expenditure budgets; eval business opportunities & propositions, & negotiate agreements. Req: Bach deg in any field & 2 yrs exp as chief executive in broadcast co. Forward resume to HR Dept.: Chyron Corporation, 5 Hub Drive, Melville, NY 11747.

D. E. Shaw Research and Development, LLC does not discriminate in employment matters on the basis of race, color, religion, gender, national origin, age, military service eligibility, veteran status, sexual orientation, marital status, disability, or any other protected class.
SOFTWARE DEVELOPER sought by software co. in The Woodlands, TX. Assists in the development of pocket PC programming & database administration. Requires degree & exp. Respond by resume only to: Ms. Sandy Kelly, HR, S/K#10, Optimum Computer Solutions Inc., 24900 Pitkin Rd., Ste. 320, The Woodlands, TX 77386.
NETWORK ADMINISTRATOR. Must have BS in Comp. Sci. or Computer Info. Sciences or equiv. + 2 yrs exp. in SAN/NAS environment: EMC Systems-Symmetrix, Clariion FC5700/FC4700-2, Celerra File Server (Datamover 507/510), DS16B Fiber Switches; troubleshooting of SUN Servers and administration of Real Professional Server 7.0 and Windows Media Server 4. Send resume: M. Fischetti, Telefonica Data USA, 1111 Brickell Av, 10th floor, Miami, FL 33131.
PROGRAMMER ANALYST needed w/Bach or foreign equiv in Comp Sci or Engg or Math & 1 yr exp to analyze, dvlp & implmt Web-based s/ware solutions using ASP, Java, JavaScript, MS SQL Server, T-SQL/PL SQL, Oracle, VB, VB Script, Visual Studio, Dreamweaver & HTML. Develop stored procedures, triggers, views & database functions. Mail res to: Jen NY Inc. d/b/a Discount Tickets, 213 West 35th St, Ste 1201, NY, NY 10001. Job loc: New York, NY.
UNIVERSITY OF COLORADO BOULDER, Senior Scientific Software Engineer (Professional Research Associate). The Center for Spoken Language Research at the University of Colorado Boulder seeks a scientific software engineer for research and development of virtual therapists and virtual tutors that interact with students and patients through human language and character animation technologies. DUTIES: Successful candidate conducts research in areas of machine perception and synthesis and develops, integrates, and tests research results in intelligent tutoring and therapy systems. The candidate will research, develop, and integrate auditory and visual speech recognition, character generation, and natural language understanding components to enable real-time networked interaction between computer users and 3-D computer characters. Candidate directs and conducts research in computer vision and synthesis, software architecture design, system integration, and testing of fully functional interactive applications for tutoring and therapy in networked and standalone applications. Resulting systems will be aware of the user and support natural dialog interaction between the user and the virtual tutor or therapist. Responsibilities will include algorithm development, software system design, software design and coding, testing, documentation, and research paper writing. MINIMUM REQUIREMENTS: PhD in Computer Science or Information Systems and 2 years of experience in the field, including: (1) Machine perception: Experience in signal processing, computer vision, and pattern recognition techniques such as feature extraction, pattern selection, and classification. (2) 3-D character animation: Experience developing real-time algorithms for 3-D character animation for anatomically correct and graceful facial animation sequences. Experience with technologies for real-time rendering, 3-D geometric modeling, texture mapping, and surface generation. Experience using commercial 3-D character animation tools (e.g., 3dmax or Maya). (3) Strong programming skills in: C++ (MFC), Java, OpenGL and DirectX, Tcl/Tk. Experience in programming 3dmax or Maya plug-ins and exporters. (4) Publications in the field of computer vision and graphics are required. Qualified candidate will be a self-starter and team player. Salary is commensurate with experience. Employment will take place at the University of Colorado at Boulder in The Center for Spoken Language Research, 3215 Marine Street, Boulder, Colorado. To apply, please send cover letter and resume to: The Center for Spoken Language Research, University of Colorado at Boulder, 3215 Marine Street, Boulder, Colorado 80309.
Contact: Kathleen Marie Kryczka, Email: [email protected], Phone: (303) 735-5276, Fax: (303) 735-5072.
SOFTWARE ENGINEER. Participate in the analysis, design, development and testing of client solutions. MS in Computer Science req. Send ad w/resume to: OSI Consulting; 5950 Canoga Ave. #300; Woodland Hills, CA 91367.
APPLICATION INTEGRATION ENGINEER wanted by Tekvoice Communications Inc. Must have working knowledge of Voice over IP protocols & applications, PBX, Asterisk IP-PBX, PERL, JAVA, PHP, C programming as related to Asterisk. Web
Programming, advanced Linux (GENTOO) & UNIX administration knowledge required. Must have min. 4 years of PBX, Computer & IT experience. Comp salary offered. Mail resume to: M.Koroschetz, 8245 N.W. 36 Street, Ste. 3, Miami, FL 33166.
DATABASE ADMINISTRATOR sought in Houston, TX to design & administer business database systems. Req'd degree in MIS or Comp. Sc. Respond by resume only to: M. Chang, #H/Y-10, ChangSheng, Inc., 10402 Harwin Dr., Houston, TX 77036.
COMPUTER ENGINEER (Staten Island, NY). Research, design, develop & test operating sys, compilers & netwk distribution for communications applications. Set operational specs, formulate & analyze sf/ware & hd/ware requm'ts. 2 yrs exp req + BS in CS. Fax resume to (718) 494-8659, Citygear Wireless, Inc., Attn: Mr. Danny Shin.
NORTH CAROLINA STATE UNIVERSITY, Department Head, Department of Computer Science. The College of Engineering at North Carolina State University (NCSU) invites nominations and applications for the position of Department Head of Computer Science. The successful candidate will maintain the strong research and educational programs currently existing in the department, and will provide the leadership and vision to elevate its stature. The new Head should possess a demonstrated ability for leadership, management, administration, and effective communication with all stakeholders. The successful candidate will have a distinguished record and commitment to quality research, teaching, professional activities and overall excellence in academic scholarship. The new Head is expected to guide the department in directions that will enhance the success of its students, faculty, and staff as well as meet state and global needs. The Department has 41 faculty members and a student body composed of about 800 undergraduate and 360 graduate students. Degrees offered include the BS, MS, Master of Computer Science, Master of Science in Computer Networking, and PhD degrees. The department’s total annual research expenditures average between $5-6 million. Of the 22 new faculty hired in the past 10 years, 14 have received NSF CAREER awards. Within a year, the Department will be relocating to a new engineering building, which will also house the Department of Electrical and Computer Engineering. The new
building is located on NCSU’s Centennial Campus, which also provides offices and laboratories for corporate and public organizations. Areas of research strength within the Department include artificial intelligence, computer networking, operating systems, software engineering, and theory. In addition, the department’s applied and multidisciplinary research strengths include bioinformatics, scientific computation, e-commerce and data mining. More information about the Department’s research and an expanded version of this ad can be found at www.cs.ncsu.edu. Qualifications for the position include an earned doctorate degree in Computer Science or a relevant discipline; an excellent record of scholarly and educational accomplishments; a demonstrated ability to attract and manage external research funding; and strong leadership skills. The successful candidate is expected to be appointed to the rank of Professor, and to assume the departmental Head position on July 1, 2006. Nominations and applications should include a professional resume and at least four appropriate references. To ensure full consideration, applications must be received by August 1, 2005; however, applications will continue to be accepted until the position is filled. Please send nominations or applications to: Chairperson, Computer Science Head Search Committee, Box 7904, NC State University, Raleigh, NC 27695-7904. Inquiries may be sent via e-mail to [email protected]. North Carolina State University is an equal opportunity/affirmative action employer (OEO, AA) and welcomes all persons without regard to sexual orientation. Individuals with disabilities desiring accommodations in the application process should contact the committee [919-515-9952,
[email protected]].
WEB DEVELOPER needed w/Bach or foreign equiv. in Comp Sci or Engg or Math & 2 yrs exp to dev web applic using Java, Websphere applic. server, VB, ASP, ASP.Net & C#, DB2, MS SQL Server & Visual Studio.Net on Win platform. Send res to: Soft Tech Source, A Div. of Ramesh Sarva CPA, P.C. 109-17 72nd Rd, Ste # 6R, Forest Hills, NY 11375 Job loc: Forest Hills, NY or in any unanticipated loc in US.
NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY, Information Technology Laboratory, Gaithersburg, MD, Computer Scientist, ZP-1550-IV, OR Computer Engineer, ZP-0854-IV, OR IT Specialist, ZP-2210-IV. The Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST) is seeking highly qualified individuals with broad research interests and experience in academia, industry, and government to serve on its Advanced Technology Council, charged with formulating a coherent technical vision for the laboratory in national priorities and, together with ITL Division Chiefs, guiding its implementation. Of special interest are: cyber security, software quality assurance, knowledge management, nanocomputation, information handling, usability and access, biometrics, next-generation Internet technologies, and the healthcare information network. Potential applicants can look at ITL's web site at www.itl.nist.gov to view the laboratory's current research and technology portfolio. Applicants may qualify based upon one year of experience in research or technical leadership positions equivalent to at least the GS-12 (ZP-III at NIST) in computer science or a related field of science or engineering (PhD is preferred). Full-time employment is preferred, but other options are available. Appointments are at the ZP-IV level (equivalent to GS-13/14); salary range $74,782-$114,882. If interested, please send a copy of your resume with a description of your research interests and goals, and an Applicant Supply Cover Sheet (Word format) at
www.itl.nist.gov/itl-opportunities.html via email to
[email protected]. For questions, contact
[email protected]. US citizenship is required. The Department of Commerce/NIST is an Equal Opportunity Employer.
DATABASE ANALYST sought by import/export co in NYC. Resp. for data analysis & prgm implmtn; create & dsgn media, packaging & mktg prgms & multimedia prgms for e-commerce. Must have min BS in Info Tech or equiv. Must be fluent in Korean. M-F. 9-5. Resume to: Personnel, David & Young Co., Inc., 28 W. 27th St, NY, NY 10001.
COMPUTER AND INFORMATION SYSTEMS MANAGER sought by software development firm in Pleasanton, CA. Position requires Bachelor's degree in Computer Engineering or its foreign equivalent, plus 5 years post-baccalaureate experience, or a Master's degree in Computer Engineering or its foreign equivalent, plus 3 years post-baccalaureate experience. Send resume to HR, CSWL, Inc., 4637 Chabot Drive, Suite 240, Pleasanton, CA 94588.

MICROSOFT: Powerful Minds Attract. Looking for a career on the cutting edge of technology? People here love their work because they get to think big and dream big. Combine your skills with our resources – and really take off.
Redmond, WA: Software Engineers, Program Managers, Architects, Database Administrators, Systems Administrators, Directors, Hardware Engineers (Electrical, Design, VLSI, Verification), Software Localization Engineers, Support Engineers, Support Analysts, Support Professionals, Support Managers, Systems Analysts and Consultants, Systems Engineers, and Usability Engineers; Technical Writers and Editors, Product Designers, Marketing Managers and Product Managers.
Chicago, IL: Field Support Engineers.
Mountain View, CA: Software Engineers, Solutions Specialists, Systems Administrators, Release Engineers, Marketing Managers.
San Francisco, CA: Developer Evangelists.
Reno, NV: Program Managers.
To submit your resume, please visit our website at http://www.microsoft.com/careers or mail your resume to Microsoft Corporation, Attn: Microsoft Staffing – A428, One Microsoft Way, Redmond, WA 98052. Microsoft offers an excellent benefits package to full-time employees, including medical, dental, vacation, employee stock purchase plan, and 401(k). All part of our commitment to our most important assets – our employees. Please visit our company website for more details. microsoft.com/careers
©2005 Microsoft Corporation. All rights reserved. Microsoft is a registered trademark of the Microsoft Corporation in the United States and/or other countries. Microsoft is an equal opportunity employer and supports workplace diversity.

IT PROJECT ADMINISTRATOR: Jersey City, NJ: for Global Logistics & F/Fwdg. Co. to implement & support bus. process for CIEL users; analyze KN bus. proc., systems, risks, hardware, & user training to determine feasibility of new/upgraded prgms; document new features & changes for sys. updates; monitor KN track/trace sys. for global customers & web bookings to ensure compatibility & efficiency; provide user support and training in AES/Customs; test and troubleshoot. Must have Proj. Mgmt. Cert. & 3 yrs exp in Global Imp/Exp. Mail resume: Attn: NYC RV/S, Kuehne & Nagel Inc., 10 Exch. Pl, 19 FL, Jersey City, NJ 07302.
DATABASE ADMINISTRATOR sought in Houston, TX to design & administer business database systems. Requires degree in MIS or CIS. Respond by resume only to: Mr. H. Lakhany, #M/C-10, A. Lakhany International, Inc., 3730 West Chase Dr., Houston, TX 77042.
ERICSSON, a global leader in telecommunications, has the following positions open in Plano, Texas. These positions require a degree (BS/MS) and prior experience. Software Engineer - duties to include telecom products and solutions and support for network systems, cellular systems, or software, as well as specific products/technologies required; refer to job code: SFTJRN001. Systems Engineer - duties to include Radio Frequency and optimization on high-capacity networks in GSM or TDMA, as well as specific products/technologies required; refer to job code: SYSJRN001. Applicants may e-mail resumes to:
[email protected]. Please reference job code sought. EOE
SOFTWARE ENGINEER wanted in Somerset, New Jersey to research, design/develop systems for remote access to Unix-based shared software applications to support web-based virtual private network functionality. Work with variants of Unix operating systems, TCP/IP sockets for network programming, C++ & Perl. Requires M.Sc. in Comp. Sc., Electronics, Eng., or Physics and at least 5 yrs exp in job offered. Mail resumes to AEP Networks, 347 Elizabeth Ave., Suite #100, Somerset, NJ 08873. Use job code SWE/LC.
Online Advertising

Are you recruiting for a computer scientist or engineer? Submission details: Rates are $195.00 for 30 days. Send copy to: Marian Anderson, IEEE Computer Society, 10662 Los Vaqueros Circle, Los Alamitos, California 90720-1314; phone: +1 714 821 8380; fax: +1 714 821 4010; email: [email protected].
http://computer.org
ADVERTISER / PRODUCT INDEX JUNE 2005

Advertiser                                   Page Number
D.E. Shaw & Company                          89
eScience and Grid Technologies 2005          Cover 3
Hot Chips 17                                 5
IEEE Computer Society Membership             54-56
IPDPS 2006                                   Cover 4
ISM 2005                                     17
John Wiley & Sons, Inc.                      Cover 2
McMaster University                          88
Microsoft Corporation                        91
Requirements Engineering Conference 2005     8
Classified Advertising                       88-92

Advertising Personnel

Marion Delaney, IEEE Media, Advertising Director
Phone: +1 212 419 7766; Fax: +1 212 419 7589; Email: [email protected]

Sandy Brown, IEEE Computer Society, Business Development Manager
Phone: +1 714 821 8380; Fax: +1 714 821 4010; Email: [email protected]

Marian Anderson, Advertising Coordinator
Phone: +1 714 821 8380; Fax: +1 714 821 4010; Email: [email protected]

Advertising Sales Representatives
Mid Atlantic (product/recruitment) Dawn Becker Phone: +1 732 772 0160 Fax: +1 732 772 0161 Email:
[email protected] New England (product) Jody Estabrook Phone: +1 978 244 0192 Fax: +1 978 244 0103 Email:
[email protected] New England (recruitment) Robert Zwick Phone: +1 212 419 7765 Fax: +1 212 419 7570 Email:
[email protected] Connecticut (product) Stan Greenfield Phone: +1 203 938 2418 Fax: +1 203 938 3211 Email:
[email protected]
92
Computer
Midwest (product) Dave Jones Phone: +1 708 442 5633 Fax: +1 708 442 7620 Email:
[email protected] Will Hamilton Phone: +1 269 381 2156 Fax: +1 269 381 2556 Email:
[email protected] Joe DiNardo Phone: +1 440 248 2456 Fax: +1 440 248 2594 Email:
[email protected] Southeast (recruitment) Thomas M. Flynn Phone: +1 770 645 2944 Fax: +1 770 993 4423 Email:
[email protected]
Midwest/Southwest (recruitment) Darcy Giovingo Phone: +1 847 498-4520 Fax: +1 847 498-5911 Email:
[email protected]
Northwest/Southern CA (recruitment) Tim Matteson Phone: +1 310 836 4064 Fax: +1 310 836 4067 Email:
[email protected]
Southwest (product) Josh Mayer Phone: +1 972 423 5507 Fax: +1 972 423 6858 Email:
[email protected]
Southeast (product) Bill Holland Phone: +1 770 435 6549 Fax: +1 770 435 0243 Email:
[email protected]
Northwest (product) Peter D. Scott Phone: +1 415 421-7950 Fax: +1 415 398-4156 Email:
[email protected]
Japan Tim Matteson Phone: +1 310 836 4064 Fax: +1 310 836 4067 Email:
[email protected]
Southern CA (product) Marshall Rubin Phone: +1 818 888 2407 Fax: +1 818 888 4907 Email:
[email protected]
Europe (product/recruitment) Hilary Turnbull Phone: +44 1875 825700 Fax: +44 1875 825701 Email:
[email protected]
PRODUCTS
Converged Access Device Integrates Voice, Data, and Video

Converged Access Point, from Converged Access, enables small and midsized businesses to integrate separate voice, data, and video networks in a single broadband WAN. The product is designed to combine high application performance with secure Internet and mobile connectivity in a cost-effective, compact footprint. CAP consolidates end-to-end QoS, layer 7-aware traffic management, efficient bandwidth utilization, business-class security, legacy voice and fax support, toll-quality VoIP, integral VPN, IP routing, multiport Ethernet switching, and an optional IEEE 802.11b/g wireless access point. It can be used with all major broadband technologies, including DSL, cable, T1, Ethernet Metro, IP VPN, and Frame Relay. Converged Access Point pricing starts at $850; www.convergedaccess.com.

AMD Introduces Dual-Core Opteron Processors

Advanced Micro Devices has launched its first dual-core x86 64-bit chip for 2P, 4P, and 8P servers and workstations. AMD Opteron 200 and 800 series processors, made using 90nm process technology, improve performance by up to 90 percent over single-core AMD Opteron processors, according to the company. Each series comes in three models that reach speeds of 1.8, 2.0, and 2.2 GHz, respectively. The AMD Opteron 200 series starts at $851, and the Opteron 800 series starts at $1,514, in 1,000-unit quantities; www.amd.com.
Azul Compute Appliance Offers Network-Attached Processing

Azul Systems has introduced the Azul Compute Appliance, a network-attached-processing system for virtual-machine-based applications that run on IBM WebSphere, BEA WebLogic, JBoss, and other open source J2EE platforms. Multiple appliances can form seamless "compute pools," allowing tens to hundreds of applications to share a common managed resource without any application-level modifications, binary compatibility requirements, or OS dependencies. The Azul Compute Appliance is available in three configurations with 96, 192, and 384 processor cores, with prices ranging from $89,000 to $799,000; www.azulsystems.com.
Mobility System Software Now Supports Cisco Systems APs

As part of its Open Access Point Initiative, Trapeze Networks has upgraded its Mobility System Software to enable interoperability with Cisco Aironet 350, 1100, and 1200 series access points. Wireless clients can now become part of a mobility domain comprising Trapeze switches and Cisco and Trapeze APs, enjoying all the benefits of identity-based networking, including secure roaming, 802.1X authentication, per-client authorization, and accounting support. Support for Cisco Aironet APs is a free Mobility System Software upgrade; www.trapezenetworks.com.
ASP.NET Authoring Tool from Eiffel Software

Eiffel Software has released a free tool for developing dynamic, high-performance ASP.NET Web sites and applications using the Eiffel object-oriented language. The product runs with the latest Eiffel compiler on both versions 1.0 and 1.1 of the Microsoft .NET Framework. Eiffel for ASP.NET is available for download at www.eiffel.com.
New HD T1/E1 Boards from GL Communications

GL Communications has released a new generation of high-density T1/E1 boards that can process hundreds of channels or timeslots simultaneously using DMA and a 32-bit-wide bus. CPU utilization is negligible, and the upgraded FPGA can handle specialized functions with a single load mechanism. The new HD T1/E1 cards cost $5,900 each; www.gl.com.
Please send new product announcements to
[email protected].
Converged Access Point integrates separate voice, data, and video networks in a single broadband infrastructure.

Published by the IEEE Computer Society
BOOKSHELF
Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, 2nd ed., Barry Wilkinson and Michael Allen. This nontheoretical, highly accessible text, which is linked to real parallel programming software, provides a practical approach to techniques that students can use to write and evaluate their parallel programs. Supported by the National Science Foundation and exhaustively class-tested, this book concentrates exclusively on parallel programs that can be executed on networked workstations using freely available parallel software tools rather than requiring access to a special multiprocessor system. This second edition has been revised to incorporate a greater focus on cluster programming, which has become more widespread with the availability of low-cost computers. Prentice Hall PTR; www.phptr.com; 0-13-140563-2; 496 pp.; $83.00.
Data Mining and Knowledge Discovery with Evolutionary Algorithms, Alex A. Freitas. This book integrates two areas of computer science: data mining and evolutionary algorithms. Both areas have become increasingly popular in the last few years, and their integration is currently the subject of active research. The author emphasizes the importance of discovering comprehensible, interesting knowledge that can be potentially useful for intelligent decision making. He observes that applying evolutionary algorithms to data mining leverages their robust search methods to perform a global search in the space of candidate solutions. In contrast, most rule-induction methods perform a local, greedy search in the space of candidate rules. Intuitively, the global search of evolutionary algorithms can discover interesting rules and patterns that the greedy search would miss. Springer; www.springeronline.com; 3-540-43331-7; 265 pp.; $63.95.
Understanding Your User: A Practical Guide to User Requirements, Methods, Tools, and Techniques, Catherine Courage and Kathy Baxter. Many companies employ a user-centered design process, but for most companies, usability begins and ends with the usability test. Although usability testing is a critical part of an effective user-centered life cycle, it provides only one component of the UCD process. This book instead focuses on the user-requirements-gathering stage of product development, which often receives less attention than usability testing but is equally important, and it provides techniques that may be new to usability professionals. For each technique, readers will learn how to prepare for and conduct the activity and how to analyze and present the results. Because each method provides different information about users and their requirements, the techniques can be used together to form a complete picture of the users' requirements or separately to address specific product questions. Morgan Kaufmann; www.mkp.com; 1-55860-935-0; 704 pp.; $59.95.
Agile Development with ICONIX Process: People, Process, and Pragmatism, Doug Rosenberg, Matt Stephens, and Mark Collins-Cope. This book describes how to apply the ICONIX Process—a minimal, use-case-driven modeling process—in an agile software project. It offers practical advice for avoiding common agile pitfalls and defines a core agile subset that can help readers avoid spending years learning to do agile programming. The book follows a real-life .NET/C# project through several iterations from inception and UML modeling to working code. Readers can then go online to compare the finished product with the initial set of use cases. The authors also introduce several extensions to the core ICONIX process, including combining test-driven development with up-front design to maximize both approaches, using Java and JUnit examples. Apress; www.apress.com; 1-59059-464-9; 261 pp.; $39.99.

An Introduction to Programming with Mathematica, 3rd ed., Paul Wellin, Richard Gaylord, and Samuel Kamin. The authors of this book introduce the Mathematica programming language to a wide audience. Since the book's initial edition, significant changes have occurred in Mathematica and its use worldwide. Keeping pace with these changes, the updated version of this book includes new and revised chapters on numerics and on procedural, rule-based, and front-end programming, and it covers the latest features up to and including Mathematica 5.1. Mathematica notebooks, available from the publisher, contain examples, programs, and solutions to the book's exercises. Additionally, material to supplement later versions of the software will be made available. This text can help scientific students, researchers, and programmers deepen their understanding of Mathematica. It should also appeal to those keen to program using an interactive language containing programming paradigms from all major programming languages. Cambridge University Press; www.cambridge.org; 0-521-84678-1; 570 pp.; $70.
Editor: Michael J. Lutz, Rochester Institute of Technology, Rochester, NY; [email protected]. Send press releases and new books to Computer, 10662 Los Vaqueros Circle, Los Alamitos, CA 90720; fax +1 714 821 4010; [email protected].
ENTERTAINMENT COMPUTING
Utility Model for On-Demand Digital Content

S.R. Subramanya, LGE Mobile Research
Byung K. Yi, LG Electronics

Only a robust and scalable model can handle the vast amounts of content today's consumers demand.

Rapid and converging technological advances in computing, communications, and consumer electronics have caused an explosion in the generation, processing, storage, transmission, and consumption of enormous amounts of digital content. Analysts expect these trends to accelerate in the near future and to have profound effects on content delivery and consumption. The term content generally refers to movies, songs, news, or educational material that spans various application domains, including personal entertainment, business, and education and training. The technology model currently used to supply this content is inefficient and ineffective today. Newer models must be developed to support the ubiquitous, transparent, and cost-effective on-demand delivery of digital content, as driven by user preferences.

We propose a content-utility system model for digital content delivery and consumption that resembles traditional utilities such as electrical power and water supply systems. Several recent trends demonstrate the emergence of the utility model in the entertainment sector. For example, Apple Computer's hugely popular on-demand audio service, iTunes, provides consumers with free 30-second previews and selective downloading of music tracks for a reasonable fee. This service has created a demand for new hardware such as the iPod, iPod mini, and iPod shuffle content-consumption devices. Five film studios—MGM, Paramount, Universal, Warner Bros., and Columbia—have backed initiatives to put 100 of their recent releases on Web sites as downloads that cost from three to five dollars and self-erase after one viewing. Time Warner recently introduced a digital cable service that features movies-on-demand through its interactive iCONTROL service, which provides instant access to hundreds of movies anytime.

These trends indicate the necessity and success of a well-designed content-utility model. Such a model defines the roles and interactions of the content-utility system's components and subsystems and also identifies the need for policies related to the system's overall functioning.

We expect the proposed model to

• enhance the consumption of digital media content;
• provide easier access to archived content;
• promote innovation in the development of newer hardware, software, and devices related to various aspects of content delivery and consumption; and
• generate demand for newer and novel content-related services and applications.

Before developers can implement the proposed model, however, they must address several important issues.
CURRENT TECHNOLOGY

To date, the most common means of content delivery to consumers have been print media, broadcast TV, cable, radio, and, more recently, the Internet. Our focus is on digital video content, whose delivery can be broadly classified as either online or offline. The most common forms of online delivery are broadcast, cable, and satellite-dish systems. Packaged content, such as CD-ROMs and DVDs, provides the most common form of offline delivery. Both online and offline delivery schemes have several drawbacks. The major limitations of online delivery include the following:

• push delivery with no interactivity,
• rigid schedules fixed by the sender,
• fixed viewing locations,
• fixed viewing devices,
Figure 1. Content-utility system. The main entities include content providers, service providers, and consumers.
• fixed tariffs independent of actual usage, and
• fixed content not tailored to individual users.

Even pay-per-view and video-on-demand systems have several limitations. The major disadvantages of offline, packaged content are

• the entire prepackaged content must be purchased, with no selective purchase of tracks or scenes allowed;
• the difficulty of transporting an entire collection makes it essentially nonportable; and
• the actual media use may be sporadic because the user seldom accesses the entire content, and a major portion of the content might be used only once.

These drawbacks stem from inherent rigidities in the delivery and consumption model, which its developers based on older technologies and infrastructures. Developers can now leverage emerging technologies and techniques to create a new model that provides content on demand to subscribers anytime, anywhere, for consumption on a selection of devices in a variety of modes supporting user-specified preferences.
CONTENT-UTILITY SYSTEM MODEL

Conventional electrical power or water supply utility systems have complex and capital-intensive mechanisms for generating, processing, storing, distributing, and sharing system resources.
However, they hide their inherent complexities from the consumer. A traditional utility system can be characterized as offering availability, transparency, tamper-proof mechanisms, and fair billing. The content-utility system we propose would have similar characteristics. Currently, no digital content-utility system incorporates this model.
System overview

Figure 1 shows the three main entities in our proposed content-utility system: content providers, service providers, and consumers. The content providers are responsible for creating content, processing it to conform to certain formats, and developing the content metadata. The service providers encompass several distinct entities and facilitate effective and easy content consumption. These entities include storage providers, who broadly provide and manage content storage; communications providers, who transfer content over communications and data networks swiftly and seamlessly; and content services, which provide facilities for easy and effective content consumption. The consumers use a variety of devices for rendering the delivered content, ranging from HDTVs to cell phones.

Figure 2 shows the major operational outline of the process for providing content to consumers. The system first captures or generates the content. It then processes and packages the generated content (which involves editing, formatting, encoding, and so on), then stores it (after generating indices) for online access. It also incorporates appropriate content-protection mechanisms such as digital watermarking and encryption to prevent misuse at storage, during transmission, or during consumption. The system then delivers the content to consumers swiftly and economically, while taking into consideration consumer preferences.
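The operational outline above can be sketched as a chain of stages. This is an illustrative toy only; the stage names, data shapes, and logic below are invented for the example and are not part of the proposed system.

```python
# Toy sketch of the content-delivery pipeline described in the column:
# generation -> processing/packaging -> storage -> protection -> delivery.
# All stage logic here is invented for illustration.

def generate(title):
    return {"title": title, "raw": True}

def process_and_package(content):
    # Editing, formatting, and encoding would happen at this stage.
    content.update(raw=False, format="mp4")
    return content

def store(content, index):
    # Generate an index entry so the content is available for online access.
    index[content["title"]] = content
    return content

def protect(content):
    # Stand-in for watermarking/encryption.
    content["watermarked"] = True
    return content

def deliver(content, device):
    # Delivery would adapt to the consumer's device and preferences.
    return f'{content["title"]} ({content["format"]}) -> {device}'

index = {}
movie = protect(store(process_and_package(generate("Demo Movie")), index))
print(deliver(movie, "cell phone"))  # Demo Movie (mp4) -> cell phone
```

The point of the sketch is only the ordering of stages; a real system would attach the subsystems described below (storage, communications, DRM, billing) at the corresponding steps.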
Major subsystems

Our content-utility system model's major subsystems and their key functionalities include the following:

• content management—handles format conversions, performs content analysis for deriving metadata, and provides protection;
• storage management—deals with issues such as placement and distribution of content on disks, request scheduling, cache management, and compression;
• communications management—determines the most efficient means of transmitting content over heterogeneous networks, channel coding, and managing quality-of-service parameters;
• content services management—provides a variety of basic and enhanced services;
• digital rights management—monitors the appropriate consumption of content per the usage rights specified in the license associated with the user and content;
• user profile management—maintains user profiles and preferences and facilitates customization of content and focused advertisements; and
• billing and payment management—addresses billing issues, tracks payments, and determines royalties for content creators and providers.
SYSTEM FEATURES

Our content-utility system model provides several significant benefits.
Content-consumption flexibility

Consumers can choose to view contents of interest at times and places they find suitable, using a variety of viewing devices such as a TV, desktop computer monitor, laptop, PDA, or cell phone. Users can pause a program at any time, then resume viewing later from the same device and location or from a different one.
Figure 2. Major steps in the delivery of content from provider to consumer: generation (raw content), processing and packaging, storage (for online access), content protection, transmission and distribution, and content consumption.
High interactivity

Consumers can have frequent and meaningful interactions with the content-utility system for a variety of purposes:

• requesting programs at certain times, at specific locations, and on certain devices;
• sending control signals to browse, play, stop, fast-forward, pause, and rewind;
• providing perceptual and content feedback on program quality;
• setting user preferences; and
• following up on advertisements of interest.
Relative reduction in data replication

Several factors will reduce data replication, including the availability of powerful, cost-effective services that offer customized content anytime and anywhere. In addition, the overhead of storing and managing huge amounts of content, along with the lack of services for locally stored content, will discourage local storage and greatly reduce local-content replication.
Content services

An important advantage of our proposed model is the availability of several services that enhance content usage and enrich the user's experience. These services, which generally are not available in content stored on personal media, include content directories and browsing, content-based query and search, and usage billing. Enhanced services include customized content delivery and viewing preferences, cross-referencing, and user-specified special effects.
Minimal burden on consumers

In the model we propose, there may be little or no need for purchasing enormous amounts of packaged content because most of this data will be available on demand. This minimizes the need for storing and managing physical media and tracking specific items of interest such as audio tracks and video clips. As an added convenience, a user could program a virtual set-top box universal interface to manage the subscribed services, then periodically reprogram it to reflect upgrades or changes in service plans.
Cost amortization

Appropriate amortization significantly reduces the cost of content delivery and services. The perception of reasonable cost and fair billing for content use will be a significant factor in reducing piracy.
MAJOR ISSUES

A few of the major generic issues that must be addressed when providing content on demand and transparently are

• storage management—dealing with issues such as placing and distributing content on servers, I/O scheduling, content caching techniques, and supporting a variety of storage formats;
• content indexing, query, and search—deriving metadata and providing support for browsing and content-based queries and search; and
• content transmission—sending data over heterogeneous networks using different protocols and standards while adhering to various quality-of-service requirements.

Several of the major issues specific to the content-utility model that must be addressed follow.
Tariff structure

Like a conventional utility system, the content-utility system needs a tariff structure that is profitable to the various providers in the chain from production through consumption. At the same time, it should be fair and affordable for consumers. The tariff structure must also be flexible and scalable to support newer devices, services, and content modes, since these will likely change rapidly. With enormous amounts of content available, it may be cheaper and more attractive for consumers to rent it than to own it. Therefore, content leasing and renting models must be developed.
Billing and payment

This system should support innovative pricing and payments such as micropayments. The payment mode could be prepaid, depending on the content and quality options; value-based, with a fixed price for the content; volume-based, with the price linked to the amount of content consumed; or subscription-based, with the price fixed regardless of the types and amount of content consumed.
Billing could be based partly on the amount of bits consumed so that, for example, a movie viewed on a cell phone would cost much less than one viewed on an HDTV system.
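The bit-based pricing idea can be illustrated with a few lines of arithmetic. The tariff rate and stream sizes below are invented for the example; they are not figures from the column.

```python
# Illustrative volume-based billing in the spirit of the column: price is
# tied to the bits delivered, so a low-resolution mobile stream costs far
# less than an HD stream of the same movie. Rate and sizes are invented.

RATE_PER_GB = 0.50  # hypothetical tariff: $0.50 per gigabyte delivered

def bill(gigabytes_delivered, rate=RATE_PER_GB):
    # Volume-based mode: cost scales with the amount of content consumed.
    return round(gigabytes_delivered * rate, 2)

cell_phone_movie = bill(0.3)  # ~300 MB at phone resolution
hdtv_movie = bill(8.0)        # ~8 GB at HD resolution

print(cell_phone_movie, hdtv_movie)  # 0.15 4.0
```

The other payment modes the column lists (prepaid, value-based, subscription-based) would replace the per-gigabyte function with a flat or tiered price, leaving the rest of the billing subsystem unchanged.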
Content customization

Several content-customization schemes must be supported, such as delivery of content with user-created annotations, notes, remarks, ratings, and special effects, and maintenance and updates of user-created albums. Suitable tools for specifying and managing user profiles, and the means for actually customizing the content, also must be developed.
Digital rights management

A fair and robust digital rights management system must be developed. The system should support various modes of usage and granularities of content, provide specifications for suitable content usage rights, monitor content usage while respecting privacy issues, offer methods for establishing the authenticity of rendering devices, and provide methods for determining appropriate remuneration for content creators and providers.
Content protection and privacy

As content becomes more pervasive, several points of vulnerability will leave it open to misuse. Adequate protection mechanisms must be built in, without adding excessive costs. This requires developing appropriate policies and technologies. While deterring unauthorized users, these measures should also provide anonymity and privacy to legitimate consumers.
High availability

Because consumers now depend heavily on the online content available on demand, the content-utility system should have an availability level comparable to water and electrical utility systems. This requires designing appropriate mechanisms for fault tolerance, graceful degradation, and quick recovery. In addition, the system should ensure the content's integrity.

Technological advances and the convergence of computing, communications, and consumer electronics have begun to profoundly affect the delivery and consumption of content. This trend has fostered the need for a robust and scalable content-utility model that considers several dimensions, including technology, economics, and usability. ■

S.R. Subramanya is a senior research scientist at LG Electronics Mobile Research. Contact him at [email protected].

Byung K. Yi is a senior executive vice president at LG Electronics. Contact him at [email protected].

Editor: Michael Macedonia, Georgia Tech Research Institute, Atlanta; [email protected].
INVISIBLE COMPUTING
Programmable Matter

Seth Copen Goldstein, Carnegie Mellon University
Jason D. Campbell, Intel Research Pittsburgh
Todd C. Mowry, Intel Research Pittsburgh and Carnegie Mellon University

Researchers are using nanoscale technology to create microrobot ensembles for modeling 3D scenes.

In the past 50 years, computers have shrunk from room-size mainframes to lightweight handhelds. This fantastic miniaturization is primarily the result of high-volume nanoscale manufacturing. While this technology has predominantly been applied to logic and memory, it's now being used to create advanced microelectromechanical systems using both top-down and bottom-up processes. One possible outcome of continued progress in high-volume nanoscale assembly is the ability to inexpensively produce millimeter-scale units that integrate computing, sensing, actuation, and locomotion mechanisms. A collection of such units can be viewed as a form of programmable matter.

CLAYTRONICS

The Claytronics project (www.cs.cmu.edu/~claytronics) is a joint effort of researchers at Carnegie Mellon University and Intel Research Pittsburgh to explore how programmable matter might change the computing experience. Similar to how audio and video technologies capture and reproduce sound and moving images, respectively, we are investigating ways to reproduce moving physical 3D objects. The idea behind claytronics is neither to transport an object's original instance nor to recreate its chemical composition, but rather to create a physical artifact using programmable matter that will eventually be able to mimic the original object's shape, movement, visual appearance, sound, and tactile qualities.

Synthetic reality

One application of an ensemble, comprised of millions of cooperating robot modules, is programming it to self-assemble into arbitrary 3D shapes. Our long-term goal is to use such ensembles to achieve synthetic reality, an environment that, unlike virtual reality and augmented reality, allows for the physical realization of all computer-generated objects. Hence, users will be able to experience synthetic reality without any sensory augmentation, such as head-mounted displays. They can also physically interact with any object in the system in a natural way.

Catoms

Programmable matter consists of a collection of individual components, which we call claytronic atoms or catoms. Catoms can

• move in three dimensions in relation to other catoms,
• adhere to other catoms to maintain a 3D shape,
• communicate with other catoms in an ensemble, and
• compute state information with possible assistance from other catoms in the ensemble.

In the preliminary design, each catom is a unit with a CPU, a network device, a single-pixel display, one or more sensors, a means of locomotion, and a mechanism for adhering to other catoms. Although this sounds like a microrobot, we believe that implementing a completely autonomous microrobot is unnecessarily complex. Instead, we take a cue from cellular reconfigurable robotics research to simplify the individual robot modules so that they are easier to manufacture using high-volume methods.

Ensemble principle

Realizing this vision requires new ways of thinking about massive numbers of cooperating millimeter-scale units. Most importantly, it demands simplifying and redesigning the software and hardware used in each catom to reduce complexity and manufacturing cost and increase robustness and reliability. For example, each catom must work cooperatively with others in the ensemble to move, communicate, and obtain power. Consequently, our designs strictly adhere to the ensemble principle: A robot module should include only enough functionality to contribute to the ensemble's desired functionality. Three early results of our research each highlight a key aspect of the ensemble principle: easy manufacturability, powering million-robot ensembles, and surface contour control without global motion planning.

Figure 1. Claytronic atom prototypes. Each 44-mm-diameter catom is equipped with 24 electromagnets arranged in a pair of stacked rings.
HIGH-VOLUME MANUFACTURABILITY

Some catom designs will be easier to produce in mass quantity than others. Our present exploration of the design space investigates modules without moving parts, which we see as an intermediate stage toward designing catoms suitable for high-volume manufacturing. In our present macroscale (44-mm-diameter) cylindrical prototypes, shown in Figure 1, each catom is equipped with 24 electromagnets arranged in a pair of stacked rings. To move, a pair of catoms must first be in contact with another pair. Then, they must appropriately energize the next set of magnets along each of their circumferences. These prototypes demonstrate high-speed reconfiguration: approximately 100 ms for a step-reconfiguration involving uncoupling of two units, movement from one pair of contact points to another, and recoupling at the next pair of contact points. At present, the prototypes don't incorporate any significant sensing or autonomy, and they can only overcome the frictional forces opposing their own horizontal movement, but downscaling will improve the force budget substantially.

The resulting force from two similarly energized magnet coils varies roughly with the inverse cube of distance, whereas the flux due to a given coil varies with the square of the scale factor. Hence, the potential force generated between two catoms varies linearly with scale, while mass varies with the cube of scale. These relationships suggest that a 10-fold reduction in size should translate to a 100-fold increase in force relative to mass. Energy consumption and supply will still be an issue, but given sufficient energy, smaller catoms will have an easier time lifting their own weight and that of their peers, as well as resisting the other forces involved in holding the ensemble together. Our finite-element electromagnetic-physical simulations of catoms at different sizes appear to confirm this approximation and closely match our empirical measurements of magnetic force in the 44-mm prototypes. We're also studying programmable nanofiber adhesive techniques, which are necessary to eliminate the static power drain when robots are motionless while still maintaining a strong bond.
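Written out, the scaling argument above is a two-line derivation (this merely restates the relations already given, with s as the linear scale factor):

```latex
% Linear scale factor s (s = 0.1 is a 10-fold size reduction).
% From the relations in the text:
F \propto s, \qquad m \propto s^{3}
\quad\Longrightarrow\quad
\frac{F}{mg} \propto \frac{s}{s^{3}} = s^{-2} .
% At s = 0.1, the force-to-weight ratio improves by s^{-2} = 100,
% the 100-fold gain quoted in the text.
```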
POWERING MICROROBOT ENSEMBLES

Some energy requirements, such as the effort to move against gravity, scale with size. Others, such as communication and computation, don't. As microrobots are scaled down, an onboard battery's weight and volume come to exceed those of the robots themselves. To provide sufficient energy to each catom without incurring such a weight and volume penalty, we're developing methods for routing energy from an external source to all catoms in an ensemble. For example, an ensemble could tap an environmental power source, such as a special table with positive and negative electrodes, and route that power internally using catom-to-catom connections.

To simplify manufacturing and accelerate movement, we believe it's necessary to avoid intercatom connectors that carry both supply and ground via separate conductors within the connector assembly. Such complex connectors can significantly increase reconfiguration time. For example, in previously constructed modular robotic systems such as the Palo Alto Research Center's PolyBot (www2.parc.com/spl/projects/modrobots/chain/polybot/index.html) and the Dartmouth Robotics Lab's Molecule (www.cs.dartmouth.edu/~robotlab/robotlab/robots/molecule/index.html), it can take tens of seconds or even minutes for a robot module to uncouple from its neighbor, move to another module, and couple with that newly proximal module. In contrast, our present unary-connector-based prototypes can "dock" in less than 100 ms because no special connector-alignment procedure is required.

This speed advantage isn't free, however: A genderless unary connector imposes additional operational complexity in that each catom must obtain a connection to supply from one neighbor and to ground from a different neighbor. Several members of the Claytronics team have recently developed power distribution algorithms that satisfy these criteria. These algorithms require no knowledge of the ensemble configuration (lattice spacing, ensemble size, or shape) or of the power supply's location. Further, they require no on-catom power storage.
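The article doesn't detail the team's algorithms, so the following Python sketch is purely illustrative: the function name `powered`, the dictionary-of-neighbors graph encoding, and the fixed-point propagation rule are all assumptions, not the Claytronics implementation. It captures only the stated criteria: power spreads hop by hop from the catoms touching the table's electrodes, and a catom counts as powered once it can take supply from one neighbor and ground from a different neighbor.

```python
def powered(neighbors, supply_seeds, ground_seeds):
    """Toy model of unary-connector power routing.

    neighbors: dict mapping each catom id to a list of adjacent catom ids.
    supply_seeds / ground_seeds: catoms touching the table's + / - electrodes.
    """
    # Grow the supply and ground nets outward from the electrode contacts,
    # using only neighbor-to-neighbor information (no global configuration).
    supply, ground = set(supply_seeds), set(ground_seeds)
    changed = True
    while changed:
        changed = False
        for catom, nbrs in neighbors.items():
            if catom not in supply and any(n in supply for n in nbrs):
                supply.add(catom)
                changed = True
            if catom not in ground and any(n in ground for n in nbrs):
                ground.add(catom)
                changed = True

    # A catom is powered once one neighbor can hand it supply and a
    # *different* neighbor can hand it ground: a genderless unary
    # connector carries only one rail per contact.
    def ok(nbrs):
        sup = [n for n in nbrs if n in supply]
        gnd = [n for n in nbrs if n in ground]
        return any(a != b for a in sup for b in gnd)

    return {c for c, nbrs in neighbors.items() if ok(nbrs)}
```

For a five-catom chain with the supply electrode under catom 0 and the ground electrode under catom 4, only the interior catoms 1 through 3 satisfy the two-distinct-neighbors rule; the end catoms have a single neighbor each, and a real system would handle their direct electrode contact as a special case this toy omits.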
SHAPE CONTROL WITHOUT GLOBAL MOTION PLANNING

Classical approaches to creating an arbitrary shape from a group of modular robots involve motion planning through high-dimensional search or gradient-descent methods. In a million-robot ensemble, however, global search is unlikely to be tractable. Even if a method could globally plan for the entire ensemble, the communications overhead required to transmit individualized directions to each module would be very high. In addition, a global plan would break down in the face of individual unit failures. To address these concerns, we're developing algorithms that can control shape without requiring extensive planning or communication.

While this work is just beginning, Claytronics researchers have had early success with an approach inspired by semiconductor device physics. This approach focuses on the motion of holes rather than that of robots per se. Given a uniform hexagonally packed plane of catoms, a hole is a circular void due to the absence of seven catoms. Such a seven-catom hole can migrate through the ensemble by appropriate local motion of the adjacent catoms. Holes migrate through the ensemble as if moving on a frictionless plane and bounce back at the ensemble's edges. Just as bouncing gas molecules exert pressure at the edges of a balloon, bouncing holes interact frequently with each edge of the ensemble without the need for global control. As Figure 2 illustrates, edges can contract by consuming a hole or expand by creating a hole, purely under local control.

We initiate shape formation by "filling" the ensemble with holes. Each hole receives an independent, random velocity and begins to move around. A shape goal specifies the amount each edge region must contract or expand to match a desired target shape. A hole that hits a contracting edge is consumed. In effect, the empty space that constitutes the hole moves to the outside of the ensemble, pulling in the surface at that location. Similarly, expanding edges create holes and inject them into the ensemble, pushing its contour out in the corresponding local region. Importantly, all edge contouring and hole motion can be accomplished using local rules, and the overall shape of an ensemble can be programmed purely by communicating with the catoms at the edges. Hence, we use probabilistic methods to achieve a deterministic result. Our initial analyses of the corresponding 3D case suggest that surface contour control will be possible via a similar algorithm.

Figure 2. Hole motion. Edges can (a) contract by consuming a hole or (b) expand by creating a hole, purely under local control. (a) A hole approaches a contracting edge, but rather than reflecting, the edge consumes the hole. (b) An edge selected to expand creates one or more holes, launching them into the ensemble.
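The hole dynamics described above can be sketched in a few lines of simulation. This toy model is illustrative only: it tracks holes as bouncing point particles in a rectangle and lets one edge consume them, whereas the actual ensemble is a hexagonally packed plane in which holes migrate through local catom motion. The class name, geometry, and update rules are assumptions for the sketch.

```python
import random

class HoleSim:
    """Toy sketch of hole-based shape control in a rectangular ensemble."""

    def __init__(self, width, height, seed=0):
        self.w, self.h = width, height
        self.rng = random.Random(seed)
        self.holes = []    # each hole is [x, y, vx, vy]
        self.consumed = 0  # holes consumed at the contracting edge

    def fill(self, n):
        # "Fill" the ensemble with n holes, each with an independent,
        # random velocity.
        for _ in range(n):
            self.holes.append([
                self.rng.uniform(0, self.w), self.rng.uniform(0, self.h),
                self.rng.uniform(-1, 1), self.rng.uniform(-1, 1)])

    def step(self, contract_left=False):
        # Advance every hole one tick; every rule here is local to an edge.
        survivors = []
        for x, y, vx, vy in self.holes:
            x, y = x + vx, y + vy
            if x < 0:
                if contract_left:
                    self.consumed += 1  # edge consumes the hole,
                    continue            # pulling the surface inward
                x, vx = -x, -vx         # otherwise the hole reflects
            elif x > self.w:
                x, vx = 2 * self.w - x, -vx
            if y < 0:
                y, vy = -y, -vy
            elif y > self.h:
                y, vy = 2 * self.h - y, -vy
            survivors.append([x, y, vx, vy])
        self.holes = survivors
```

An expanding edge would be the mirror image: instead of consuming holes at a boundary, it appends new ones there with inward velocities, pushing the contour out.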
Our initial research results suggest it may be possible to construct, power, and control large microrobot ensembles to model 3D scenes. While many difficult problems remain, successful implementation of a dynamic physical rendering system could open the door to a new era of human-computer interface design and to new applications. Economic feasibility also poses a high bar to the manufacture and deployment of multimillion-robot ensembles. However, the innovative application of high-volume manufacturing techniques bridged a similarly large gulf in cost and physical scaling in computer hardware. Achieving the Claytronics vision won't be straightforward or quick, but by taking on some of the problems associated with building and operating these ensembles, we hope to advance the state of the art in modular reconfigurable microrobotics and encourage others to undertake related research. ■

Seth Copen Goldstein is an associate professor in the Computer Science Department at Carnegie Mellon University. Contact him at [email protected].

Jason Campbell is a senior researcher at Intel Research Pittsburgh. Contact him at [email protected].

Todd C. Mowry is director of Intel Research Pittsburgh and an associate professor in the Computer Science Department at Carnegie Mellon University. Contact him at [email protected].
Editor: Bill Schilit, Intel Research Seattle; [email protected]
THE PROFESSION
What’s All This about Systems?
Completing the system doctrine, a part can be a system, and a system interacts with its environment as a whole. Hardware comes in two categories: domain hardware and computing hardware that can process software. For example, airplane wings are domain hardware. Following the conventional definition, software also comes in two categories: programs and data. Processes are patterns of activities, people carry out activities, and systems provide services.
Rob Schaaf, Software Management Corp.
Another turn has occurred in the way we organize our work. Recently, the Software Productivity Consortium changed its name to the Systems and Software Consortium. Likewise, the IEEE's Software Engineering Standards Committee added the word systems to its name to become the Software and Systems Engineering Standards Committee. The US Department of Defense's Software Technology Conference morphed into the Systems and Software Technology Conference.

Although each organization surely had its own reasons for the name change, these transformations leave the general impression that software and software engineering alone have become limiting terms. This raises at least three general questions:

• What do systems have over software?
• What does systems engineering have over software engineering?
• How will all this affect the profession of engineers working on systems and software?

The answers to these questions point to a shift that increases the integrative powers of our engineering efforts.
SYSTEMS AND SOFTWARE

In general, a system consists of a mix of ingredients. Here, the ingredients include not just software but also hardware, people, processes, and services. Given that the problems for which we must develop or acquire solutions appear to be increasingly beyond the scope of software alone, system flexibility provides a real benefit. Thus, we must look at combinations of ingredients.

Software never exists in a vacuum: at the very least it needs computing hardware to run. The people using software depend on it for services. From the earliest days, there have been so-called service bureaus, which initially provided their services in batch form. Today, application service providers embody the concept of software as a service. Finally, whether highlighted as such or not, the application and business processes embedded in the software play a role.

Conceptually, the system differs from the software in that the former emphasizes unity and some interchangeability among the constituent ingredients. When we analyze a system, the first cut divides it into parts that aren't necessarily homogeneous, consisting exclusively of only one ingredient. Broadly defined, a system consists of parts that interact with or, in the case of data, relate to each other. Further, a part can consist of parts.
It isn't possible for an army of engineers to create billions of lines of software code and similar amounts of other ingredients, nor can a mere handful of engineers create millions of lines of code and other ingredients. In both cases, the amounts to be created are huge relative to the engineering resources. Realistically, then, reuse provides the dominant engineering form for huge systems, with development playing a lesser role.

Systems for reuse as subsystems in huge constructs tend to be large themselves. We can more easily imagine opportunities for incorporating large systems into huge systems than opportunities for incorporating large amounts of pure software into huge systems. This explains the growing popularity of the system-of-systems concept. Extracting the software from a large heterogeneous system for reuse is fraught with problems because the software will likely have intricate interfaces to the system's other ingredients.

Some might contend that mixing hardware and software as system ingredients constitutes a step backward. Admittedly, software evolved from a hardware appendix 40 years ago to a stand-alone entity with unique and intrinsic qualities. Widespread use of certain central processing architectures and of operating systems such as Unix, Linux, and Windows made this evolution possible. Then, as processing power became distributed, developers could keep the software-on-itself concept intact, first with master-slave computing, then with peer-to-peer and client-server computing, and finally with Web-based computing. Today's emphasis on middleware should preserve the encapsulated-software concept across a broad range of hardware configurations.

On the other hand, the distribution of computing power is becoming ubiquitous in so many forms that, increasingly, another view will be advantageous: one in which heterogeneous systems consist of heterogeneous subsystems, which in turn consist of heterogeneous components, and so on. Even with the commoditization of processors and the quasistandardization of operating systems, hardware, now in the sense of its near-endless configurability, has once again become an engineering variable on par with software. According to this other view, software remains an ingredient of systems and their subsidiary parts. It should be clear that these two views can coexist, and thus there is no step backward.

To the degree the systems school of thought constitutes a critique, it can be stated metaphorically: For the past 40 years, we have concentrated on flour and the qualities of flour, important as they are. Now, customers want six-foot party sandwiches. To summarize, systems provide additional scope and concomitant cohesiveness over software alone.
ENGINEERING SYSTEMS AND SOFTWARE

To compare systems engineering and software engineering, I use IEEE Std 1220-1998, Standard for Application and Management of the Systems Engineering Process (IEEE 1220), as the reference point for systems engineering. I use the IEEE Computer Society's Guide to the Software Engineering Body of Knowledge (SWEBOK) as the arbiter for generally accepted software engineering practices.

SWEBOK's second chapter, Software Requirements, describes a software engineering connection with systems and systems engineering. Systems engineering elicits, analyzes, represents, and validates system requirements. Software requirements derive from system requirements through a process of allocation: the assignment of system requirements to components, a SWEBOK term hereafter replaced with elements. SWEBOK expects each element to satisfy its allocated system requirements. In allocating system requirements, software is treated as one of a system's elements. After allocation, the analysis of an element may "discover further requirements on how the [element] needs to interact with other [elements] in order to satisfy the allocated requirements." After allocation and analysis, software engineering may further decompose the software. Nevertheless, from a system perspective, a system's software is seen and treated as a whole.

Concerning requirements, SWEBOK recognizes three forms of representation:

• a system definition, also known as a concept of system operations document, which represents the required system from the applicable domain's perspective;
• a system requirements specification, an outcome of systems engineering; and
• a software requirements specification, an outcome of software engineering.

Examining software and software engineering in light of IEEE 1220 reveals that the standard repeatedly recognizes software as a system ingredient. However, the only place the standard explicitly touches on software engineering is in its assumption that the software organization implements IEEE's standard for software life-cycle processes, IEEE/EIA 12207.0-1996. Instead, IEEE 1220 centers on the systems engineering process. The repeated application of this process defines the subject system and divides the system into subsystems, down to the lowest-level components. With each iteration, the process transforms the set of requirements on a certain part into sets of requirements on the set of subparts. The process puts no premium on letting the dividing lines fall together with the distinctions between the system ingredients. In summary, systems engineering transforms rather than allocates requirements, and, in principle, systems engineering places a structure of parts and subparts over ingredients.
IMPACT ON THE PROFESSION

Systems engineering has always had an interdisciplinary flavor and has been associated with modeling. With systems and systems engineering on the rise, we can expect our problem solving and our profession to be increasingly penetrated by organizational forms such as integrated product teams, with modeling becoming the lingua franca among the members of these interdisciplinary teams.

Here, the integrated product team concept deserves extra attention because it can easily be applied in name only. On the outside, an organization might be wrongly presented as an integrated product team, while in reality the organization consists of a hardware engineering group, a software engineering group, and a relatively small, even ad hoc, system requirements engineering group. In this nonintegrated organization, the hardware engineer's immediate team members are also hardware engineers, while the software engineer's team members are all likely to be software engineers.

To avoid disputes over the labels we put on organizations, it's helpful to look at the specific activities of a properly integrated product team. By our minimum definition, an integrated product team, working iteratively, does the following:

• develops all system requirements,
• decomposes the system into subsystems,
• designs the interactions between the subsystems,
• develops all requirements for all heterogeneous subsystems,
• verifies that the subsystems' qualities add up to the system's required qualities,
• repeats the decompose-design-develop-verify steps for each heterogeneous part lower in the hierarchy,
• constructs heterogeneous parts from subparts, or acquires these parts,
• validates the constructed parts against the respective requirements, and
• repeats the construct-validate steps up through and including the system level.

In addition, an integrated product team performs engineering functions that cover the following areas:

• product qualities such as performance, reliability, security, and ergonomics;
• life-cycle processes such as manufacturing, installation, operation, and service; and
• supporting processes such as project and risk management; problem, change, and configuration management; life-cycle process development; and tool development and acquisition.

Clearly, an integrated product team must deal not only with requirements but also with the broad responsibility of designing and constructing the subject system. On the other hand, the list of integrated product team activities leaves freedom for isolated instances in which hardware, software, or process teams can work on homogeneous parts. In such an isolated case, the specialized team might be a subsidiary of the integrated product team, or there might be an acquirer-supplier relationship between the integrated product team and the specialized team.

With integrated product teams, the number of experts in various disciplines working closely together will inevitably increase. So will the level of workmanship demanded from each expert, workmanship as defined by, for example, life-cycle processes. However, the mixed environments also might present a greater risk of making the processes ends in themselves. More than ever, professionals must start from their organization's and project's overall goals, understand what these goals require of them individually, then commit and self-manage accordingly.

Broadly speaking, where in the past we have seen our profession evolve from hardware-centric to software-centric, we might now see a further evolution to system-centricity, with a concurrent shift from a development orientation to an iterative acquisition-and-assembly orientation. ■
Rob Schaaf is a principal of Software Management Corp. Contact him at [email protected].

Editor: Neville Holmes, School of Computing, University of Tasmania; [email protected]. Links to further material are at www.comp.utas.edu.au/users/nholmes/prfsn.
How to Reach Computer

Writers: We welcome submissions. For detailed information, visit www.computer.org/computer/author.htm.

News Ideas: Contact Lee Garber at [email protected] with ideas for news features or news briefs.

Products and Books: Send product announcements to [email protected]. Contact [email protected] with book announcements.

Letters to the Editor: Please provide an e-mail address with your letter. Send letters to [email protected].

On the Web: Explore www.computer.org/computer/ for free articles and general information about Computer magazine.

Magazine Change of Address: Send change-of-address requests for magazine subscriptions to [email protected]. Make sure to specify Computer.

Missing or Damaged Copies: If you are missing an issue or received a damaged copy, contact [email protected].

Reprint Permission: To obtain permission to reprint an article, contact William Hagen, IEEE Copyrights and Trademarks Manager, at [email protected]. To buy a reprint, send a query to [email protected].
IEEE International Parallel & Distributed Processing Symposium
April (Tuesday – Saturday), Rhodes Island, Greece
IPDPS CALL FOR PARTICIPATION
IPDPS serves as a forum for engineers and scientists from around the world to present their latest research findings in the fields of parallel processing and distributed computing. The IEEE International Parallel & Distributed Processing Symposium will be held in Greece on the Isle of Rhodes. Situated on the Mediterranean Sea at the crossroads of two continents, Rhodes holds the Greek record for sunshine and boasts a rich legacy of archaeological treasures. The five-day IPDPS program will follow the usual format of contributed papers, invited speakers, panels, and commercial participation midweek, framed by workshops held on Tuesday and Saturday. For details, check the IPDPS Web site at www.ipdps.org for updates. Send general email inquiries to [email protected].
IPDPS CALL FOR PAPERS
Authors are invited to submit manuscripts that demonstrate original unpublished research in all areas of parallel and distributed processing, including the development of experimental or commercial systems. Work focusing on emerging technologies is especially welcome. Topics of interest include but are not limited to:

• Parallel and distributed algorithms, focusing on issues such as: stability, scalability, and fault tolerance of distributed systems; communication and synchronization protocols; network algorithms; and scheduling and load balancing.
• Applications of parallel and distributed computing, including web applications, peer-to-peer computing, grid computing, scientific applications, and mobile computing.
• Parallel and distributed architectures, including shared-memory and distributed-memory architectures (including petascale system designs and architectures with instruction-level and thread-level parallelism), special-purpose models (including signal and image processors, network processors, and other special-purpose processors), nontraditional processor technologies, network and interconnect architecture, parallel I/O and storage systems, system design issues for low power, design for high reliability, and performance modeling and evaluation.
• Parallel and distributed software, including parallel programming languages and compilers, operating systems, resource management, middleware, libraries, data mining, and programming environments and tools.
WHAT TO SUBMIT
Submitted manuscripts may not exceed the stated limit of single-spaced pages, including figures and tables; references may be included in addition to those pages. Details regarding manuscript requirements and submission procedures are available via Web access at www.ipdps.org. For those who have only email access, send an email message to [email protected] for an automatic reply that will contain detailed instructions for submission of manuscripts. If no electronic access is available, contact the Program Chair at the following address: Dept. of Computer Science, University of Massachusetts Amherst, Amherst, MA, USA. Submissions will be judged on correctness, originality, technical strength, significance, quality of presentation, and interest and relevance to the conference attendees. Submitted papers may NOT have appeared in or be under consideration for another conference or a journal. All manuscripts will be reviewed.
BEST PAPER AWARDS
Awards will be given for the best paper in each of the four conference technical tracks: algorithms, applications, architectures, and software. The selected papers also will be considered for possible publication in a special issue of the Journal of Parallel and Distributed Computing.
GENERAL CO-CHAIRS
Paul Spirakis, CTI & Patras University
H.J. Siegel, Colorado State University
GENERAL VICE-CHAIRS
Sotiris Nikoletseas, CTI & Patras University
Charles Weems, University of Massachusetts Amherst
PROGRAM CHAIR
Arnold L. Rosenberg, University of Massachusetts Amherst
PROGRAM VICE-CHAIRS
ALGORITHMS: Mikhail J. Atallah, Purdue University
APPLICATIONS: David A. Bader, University of New Mexico
ARCHITECTURES: Allan Gottlieb, New York University
SOFTWARE: Laxmikant Kale, University of Illinois Urbana-Champaign
IMPORTANT DATES FOR IPDPS

Sponsored by the IEEE Computer Society Technical Committee on Parallel Processing
www.ipdps.org
In cooperation with ACM SIGARCH, the IEEE Computer Society Technical Committee on Computer Architecture, and the IEEE Computer Society Technical Committee on Distributed Processing.
October: Final Deadline for Manuscripts
December: Review Decisions Mailed
January: Camera-Ready Papers Due
Hosted by the Research Academic Computer Technology Institute (CTI), Greece