www.cgw.com March 2010
3D Wonderland $6.00 USA $8.25 Canada
The Mac Networking Experts
Manage your workflow better with Shared Storage that everyone can afford. Check out all our Ethernet Shared Storage Solutions at GraniteSTOR.com
Visit us at NAB booth #SL7425
www.small-tree.com • 1.866.782.4622 • 866.STC.4MAC Small Tree • 7300 Hudson Blvd., Suite 165, Oakdale, MN 55128
www.GraniteSTOR.com
March 2010 • Volume 33 • Number 3
Innovations in visual computing for DCC professionals
Features

COVER STORY
12 Curiouser and Curiouser!
Sony Pictures Imageworks takes audiences down the rabbit hole for a trippy stereo 3D experience in Disney’s Alice in Wonderland, thanks to cutting-edge digital techniques that meld manipulated photographic elements and CG characters and environments. By Barbara Robertson

22 Commercial Appeal
Many tune in to the Super Bowl to watch the high-priced commercials, some of which employ digital effects to deliver their message. By Debra Kaufman

30 Bleeding Edge
Researchers and scientists are rewarded for their industry contributions with Sci-Tech Oscars. By Barbara Robertson

34 Shattered
A production facility breaks the mold with a stereo 3D commercial. By Karen Moltenbrey

36 In the Middle
Middleware adapts to the growing interactive entertainment market. By Christine Arrington

40 Above Par
A multiplayer online golf title avoids the traps of typical Internet games. By Karen Moltenbrey

44 Olympic AR
Augmented reality makes an appearance at the Winter Games. By Barbara Robertson

Departments

2 Editor’s Note: Electronics Evolution
Not long ago HD TVs, which promised to deliver a theater experience to at-home audiences, were the big rage. Recently, stereo 3D TVs have nudged out HD as “the next big thing” for the home. Television manufacturers are getting ready to roll out their wares, while content creators are preparing to take advantage of the technology.

4 Spotlight
Products: Luxology’s SLIK. Nvidia’s RealityServer. JVC’s IF-2D3D1 image processor.
User Focus: Rhythm & Hues tackles Alvin and the Chipmunks: The Squeakquel. Blur Studios hooks audiences with animated Goldfish spots.

46 Review
Pixologic’s ZBrush.

48 Back Products
Recent software and hardware releases.

Enhanced Content
Get more from CGW. We have enhanced our magazine content with related stories and videos online at www.cgw.com. Just click on the cover of the issue (found on the left side of the Web page), and you will find links to these online extras.
• Making audio for games an all-encompassing experience.
• New media projects embrace social networking.
• The challenges of posting reality TV.

ON THE COVER
“Wonderful” is just one of many adjectives that describe the fantastical environments, the odd-looking human characters, and the unique CG creatures that appear in the magical journey of Alice as she visits Wonderland in stereoscopic 3D. See pg. 12.
Editor’s Note
Electronics Evolution
In January, techies from around the world flocked to the bright lights of Las Vegas for the annual Consumer Electronics Show (CES) to ooh and aah over the latest and greatest electronic gizmos and devices. It is indeed the paradise of paradises for uber geeks, as companies such as Apple, Panasonic, Samsung, and a host of others unveil their wares, from cameras, to telephones, to electronic readers, and more. You name it, and if it has an “on” button and a screen, it probably was there.

So, what got everyone talking? To no one’s surprise, it’s 3D TVs. Yup, 3D in your living room.

Indeed, 3D for the home has been on our doorstep for some time now. The inroad was gaming. After all, gamers are at the top of the geek meter—they live for cutting-edge electronics . . . and for 3D. Capitalizing on this last year, Nvidia rolled out its GeForce 3D Vision system for turning PCs into stereo devices for gaming and home entertainment. Dynamic Digital Depth offered an affordable 3D conversion package, as well. There were also stereo monitors, including those from iZ3D, making gameplay even more exciting.

This year, the show featured a host of 3D monitors, including glasses-free autostereo displays from Magnetic 3D and others. The aisles also were filled with 3D televisions. Panasonic, Samsung, Sony, Toshiba, Sharp, LG, Vizio, and others announced 3D TVs that are expected to be available later in the year. Samsung and Toshiba take this one step further, claiming their televisions will actually convert 2D programs to 3D, so that all the content will be in stereo.

The downside: Viewers will have to wear stereo glasses. While some claim that glasses will detract from the experience, surveys show just the opposite: Viewers want the 3D experience, and if they have to wear glasses to have it, then so be it. But, when the time comes—and it is coming, this year in fact—will they feel the same way?
Some TVs will require simple, cheap (less than $10) polarized glasses, similar to those you get at movie theaters. Others utilize heavier active shutter glasses that are far more costly, about $50 each. But here’s the thing: Everyone in the family will have to put them on to watch their favorite shows, as none of the televisions previewed thus far enables dual viewing for 2D and 3D simultaneously. So, if you are not wearing the glasses, the images on the screen will look distorted. Yes, we are all willing to don a pair of glasses to watch the latest theater action. But doing so for two hours every so many weeks is hardly comparable to putting them on every single night.

No doubt stereo TVs have gotten a big push from the hype surrounding Avatar, a project that in and of itself proves why stereoscopic 3D is so alluring. In addition to Avatar, other 3D movies were released in 2009, and when it comes time for Act 2, the home release, studios are savvy enough to incorporate 3D content. Available on Blu-ray for your 3D pleasure this year: Cloudy with a Chance of Meatballs, Monsters vs. Aliens, and Disney’s A Christmas Carol, with more expected to follow. Yes, there are existing Blu-ray and DVD discs in 3D, such as Coraline and Journey to the Center of the Earth, but these are presented in the old anaglyph style—that is, 3D without the HD punch. A must-have with the 3D Blu-ray releases: a 3D TV and Blu-ray player, so it’s hardly surprising that this 3D movie rollout for the home is backed through relationships between the film studios and various 3D TV makers.

Still not sold on a 3D television? On the broadcast side, DirecTV has announced three 3D channels available to customers this summer (Panasonic will be the exclusive presenting sponsor of these new HD 3D channels), while Discovery (through a joint venture with Sony and IMAX) and ESPN are embracing the trend, as well.
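An aside for the technically curious: the anaglyph encoding mentioned above can be sketched in a few lines. One common red/cyan scheme keeps the red channel from the left-eye view and the green and blue channels from the right-eye view, which is exactly why color fidelity, the “HD punch,” suffers. This is an illustrative sketch of the general technique using NumPy arrays as stand-ins for frames, not the specific encoding used on any particular disc.

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine two full-color eye views into one red/cyan anaglyph frame.

    The left eye contributes only its red channel; the right eye supplies
    green and blue. Red/cyan glasses then route each mix to the correct
    eye, at the cost of accurate color reproduction.
    """
    out = right.copy()
    out[..., 0] = left[..., 0]   # channel 0 = red, taken from the left eye
    return out

# Toy 2x2 "frames": left is pure red, right is pure cyan.
left = np.zeros((2, 2, 3), dtype=np.uint8);  left[..., 0] = 255
right = np.zeros((2, 2, 3), dtype=np.uint8); right[..., 1:] = 255
frame = red_cyan_anaglyph(left, right)       # every pixel ends up white
```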
ESPN announced that its new channel will carry at least 85 sporting events in 2010, starting with several FIFA World Cup matches this summer. Other events on tap: NBA and college basketball games and college football games (including the BCS National Championship match-up). And who is sponsoring these events? Sony has signed on as an official sponsor of ESPN’s 3D network.

It’s clear that 3D TV manufacturers are invested in the concept of bringing stereo to the home. And, they are doing what they can to make sure that viewers have access to 3D content. After all, what’s the point of buying a 3D TV if there is nothing to watch in stereo? Didn’t we say the same thing about HD just a few years ago? It started with some very expensive TVs and a bit of content, but soon the prices dropped to where the average person could justify the purchase, and more and more content thus became available. Now it’s readily available in the home. Soon, so, too, will be 3D. ■
CHIEF EDITOR
[email protected]
The Magazine for Digital Content Professionals
EDITORIAL
Karen Moltenbrey, Chief Editor
[email protected] • (603) 432-7568 36 East Nashua Road Windham, NH 03087
Contributing Editors
Courtney Howard, Jenny Donelan, Audrey Doyle, George Maestri, Kathleen Maher, Martin McEachern, Barbara Robertson
WILLIAM R. RITTWAGE
Publisher, President and CEO, COP Communications
SALES
Lisa Black
Associate Publisher National Sales • Education • Recruitment
[email protected] • (903) 295-3699 fax: (214) 260-1127
Kelly Ryan
Classifieds and Reprints • Marketing
[email protected] (818) 291-1155
Editorial Office / LA Sales Office:
620 West Elk Avenue, Glendale, CA 91204 (800) 280-6446
PRODUCTION
Keith Knopf, Production Director
Knopf Bay Productions
[email protected] • (818) 291-1158
MICHAEL VIGGIANO Art Director
[email protected]
Chris Salcido
Account Representative
[email protected] • (818) 291-1144
Computer Graphics World Magazine is published by Computer Graphics World, a COP Communications company. Computer Graphics World does not verify any claims or other information appearing in any of the advertisements contained in the publication, and cannot take any responsibility for any losses or other damages incurred by readers in reliance on such content. Computer Graphics World cannot be held responsible for the safekeeping or return of unsolicited articles, manuscripts, photographs, illustrations or other materials. Address all subscription correspondence to: Computer Graphics World, 620 West Elk Ave, Glendale, CA 91204. Subscriptions are available free to qualified individuals within the United States. Non-qualified subscription rates: USA—$72 for 1 year, $98 for 2 years; Canadian subscriptions —$98 for 1 year and $136 for 2 years; all other countries—$150 for 1 year and $208 for 2 years. Digital subscriptions are available for $27 per year. Subscribers can also contact customer service by calling (800) 280 6446, opt 2 (publishing), opt 1 (subscriptions) or sending an email to
[email protected]. Change of address can be made online at http://www.omeda.com/cgw/ and click on customer service assistance.
Postmaster: Send Address Changes to
Computer Graphics World, P.O. Box 3551, Northbrook, IL 60065-3551 Please send customer service inquiries to 620 W. Elk Ave., Glendale, CA 91204
Luxology Sheds New Light on CG Imagery

Luxology, an independent technology company developing next-generation 3D content creation software, presented a new Studio Lighting & Illumination Kit (SLIK), created by Yazan Malkosh, which includes an extensive collection of presets, scenes, items, materials, and video tutorials to help customers light computer-generated (CG) creations. Designed specifically for users of Modo, Luxology’s 3D modeling, painting, and rendering application, SLIK provides realistic lighting of scenes that mimics that found in the real world of photography.

“CG artists and designers often spend a fair amount of time refining lighting solutions to bring out the best in their models,” says Brad Peebler, president of Luxology. “With SLIK, artists are easily able to bring the realism of a studio shoot into a CG-based production and create amazing 3D renderings that are vivid and believable.”

The kit includes an extensive collection of accurately modeled lights, tripods, booms, cameras, reflectors, and platform backdrops, all of which have been modeled to a
high level of detail to bring studio-quality production values to customers’ scenes. The kit also includes prebuilt lighting setups that support small-, medium-, and large-scale photography, as well as portrait studio work. The Studio Lighting & Illumination Kit is available now for $125. To use the kit, users must have Modo 401, sold separately.
PRODUCT: LIGHTING
Nvidia RealityServer Propels 3D Cloud Computing

Nvidia and Mental Images revealed the RealityServer platform for cloud computing, a combination of GPUs and software that streams interactive, photorealistic 3D applications to any Web-connected PC, laptop, netbook, or smart phone. Nvidia RealityServer—the culmination of nearly 40 collective years of hardware and software engineering by Nvidia and Mental Images—enables developers to create a new generation of consumer and enterprise 3D Web applications, all with high levels of photorealism. For instance, automobile product engineering teams will be able
to securely share and visualize complex 3D models of cars under different lighting and environmental conditions, while architects and their clients will be able to review sophisticated architectural models, rendered in different settings, including day or night. Online shoppers will be able to interactively design home interiors, rearrange furniture, and view how fabrics will drape—all with perfectly accurate lighting. The RealityServer platform comprises an Nvidia Tesla RS GPU-based server running RealityServer software from Mental Images. While photorealistic imagery has traditionally taken
PRODUCT: POSTPRODUCTION
hours or days to create, this unique, integrated solution streams images of photorealistic scenes at rates approaching an interactive gaming experience. RealityServer software utilizes Mental Images’ Iray technology, the world’s first physically correct raytracing renderer that employs the massively parallel CUDA architecture of Nvidia GPUs to create accurate photorealistic images by simulating the physics of light in its interaction with objects. The RealityServer platform is available now. Tesla RS configurations start at eight GPUs and scale to support increasing numbers of simultaneous users.
Intensity Pro introduces professional HDMI and analog editing in HD and SD for $199

Intensity Pro is the only capture and playback card for Windows™ and Mac OS X™ with HDMI and analog connections. Intensity Pro allows you to upgrade to Hollywood production quality with uncompressed or compressed video capture and playback using large screen HDTVs.
Connect to Anything!

Intensity Pro includes HDMI and component analog, NTSC/PAL and S-video connections in a low cost plug-in card. Capture from HDMI cameras, VHS and Video8 decks, gaming consoles, set-top boxes and more. Playback to large screen televisions and video projectors.
Beyond the Limits of HDV
Microsoft Windows™ or Apple Mac OS X™
HDV’s heavy compression and limited 1440 x 1080 resolution can cause problems with quality and editing. Intensity Pro eliminates these problems and lets you choose from uncompressed video, Motion JPEG, and Apple ProRes 422 for full 1920 x 1080 HDTV resolution. Now you can capture in 1080i HD, 720p HD, or NTSC/PAL video.
Intensity Pro is fully compatible with both Adobe Premiere Pro on Windows™ and Apple Final Cut Pro on Mac OS X™, as well as Motion™, Color™, DVD Studio Pro™, After Effects™, Photoshop™, Encore DVD™, Combustion™, Fusion™ and many more.
Playback to your Big Screen HDTV

Use Intensity Pro’s HDMI or analog output for incredible big screen video monitoring. Unlike FireWire™ based solutions, Intensity uses an uncompressed video connection direct to Final Cut Pro’s real time effects renderer. No FireWire compression means all CPU processing is dedicated to more effects and video layers!
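A back-of-the-envelope calculation (our arithmetic, not the vendor’s) shows why FireWire-based capture has to compress: uncompressed 8-bit 4:2:2 HD video averages 2 bytes per pixel, which puts a 1920 x 1080 stream far beyond FireWire 400’s 400 Mb/s.

```python
# Uncompressed HD bandwidth estimate (assumes 8-bit 4:2:2 video at 30 fps).
width, height, fps = 1920, 1080, 30
bytes_per_pixel = 2                     # 4:2:2 chroma subsampling averages 2 bytes/pixel
rate_bits = width * height * bytes_per_pixel * fps * 8
print(rate_bits // 10**6)               # ~995 Mb/s, vs. FireWire 400's 400 Mb/s
```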
Intensity Pro
$199
Learn more today at www.blackmagic-design.com
JVC IF-2D3D1 Aids Stereo Workflow

PRODUCT: STEREO

JVC Professional Products unveiled the IF-2D3D1 stereo image processor, available this month, which works as a 2D-to-3D converter and as a 3D left/right mixer. Compatible with various HD formats, the offering is designed to help 3D content producers improve their workflow, whether they’re converting archived 2D material or shooting original content in 3D.

Through the use of unique JVC algorithms, the IF-2D3D1 converts 2D content to 3D in real time, offering no fewer than four 3D mixed formats—line-by-line, side-by-side half, above-below, and checkerboard—which combine left-eye and right-eye images for stereo video output on a compatible device. Additionally, the IF-2D3D1 can output discrete left and right signals via HD-SDI or HDMI for dual projection or editing. Output can be adjusted for parallax (image displacement) and 3D intensity—both with natural, anaglyph, and sequential viewing modes.

Generally, 3D footage is shot using a pair of stereo cameras, but producers had no practical way of real-time monitoring on location. The IF-2D3D1 combines the left/right images, and users only need a 3D-capable monitor to view the results. Content creation workflow can also be improved through the Scope feature,
which provides a waveform monitor and vectorscope for comparing both video streams on a display to ensure that the settings for both cameras match. The Split feature combines the two video streams on one screen with a moveable boundary, allowing instant left/right comparison. Rotation ensures that both streams can be viewed the right way up and in sync when one of the two cameras has to be positioned upside down to ensure correct spacing. The IF-2D3D1 costs $30,000.
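The mixed formats named above all pack the two eye views into a single video frame, which a format-aware display then unpacks. As a rough illustration of what that packing means—a sketch with NumPy arrays standing in for video frames, not JVC’s actual implementation, which runs in dedicated hardware in real time—three of the four formats can be written like this:

```python
import numpy as np

def side_by_side_half(left, right):
    """Halve each eye horizontally; left eye fills the left half of the frame."""
    h, w, c = left.shape
    packed = np.empty((h, w, c), dtype=left.dtype)
    packed[:, : w // 2] = left[:, ::2]    # every other column of the left eye
    packed[:, w // 2 :] = right[:, ::2]   # every other column of the right eye
    return packed

def line_by_line(left, right):
    """Interleave rows: even lines from the left eye, odd lines from the right."""
    packed = np.empty_like(left)
    packed[0::2] = left[0::2]
    packed[1::2] = right[1::2]
    return packed

def checkerboard(left, right):
    """Alternate eyes per pixel in a checker pattern."""
    h, w, _ = left.shape
    mask = (np.add.outer(np.arange(h), np.arange(w)) % 2).astype(bool)
    packed = left.copy()
    packed[mask] = right[mask]            # odd (row+col) pixels come from the right eye
    return packed

# Two solid-color "frames" stand in for the camera feeds.
L = np.full((8, 8, 3), 255, dtype=np.uint8)   # left eye: white
R = np.zeros((8, 8, 3), dtype=np.uint8)       # right eye: black
sbs = side_by_side_half(L, R)                 # left half white, right half black
```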
Helping Alvin and Friends Take the Stage
Despite a repeat performance from the incomparable Alvin, Simon, and Theodore, Alvin and the Chipmunks: The Squeakquel posed a unique challenge for the digital artists at Rhythm & Hues (R&H). The second installment in the film franchise incorporated 20 minutes more of animation and doubled the number of main 3D characters with the addition of The Chipettes, three female chipmunk counterparts. In addition to the expanded cast of characters, action sequences were extended to feature the chipmunks playing football, dodgeball, and cheerleading with live-action high schoolers.

As with any live-action/computer-generated film hybrid, boosting workflow interactivity and speed enables artists to iterate more and produce better, more seamless results. To that end, R&H deployed Nvidia Quadro professional graphics cards across 1000 workstations at facilities in Los Angeles, Mumbai and Hyderabad (India), and Kuala Lumpur (Malaysia) to help artists preview their work more quickly and visualize CG characters and elements in their scenes. “There was a tremendous amount of CG work demanded in this film, so being able to leverage Nvidia Quadro technology to preview our work quickly and generate many iterations of scenes was essential,” says Mark Brown, vice president of technology at R&H.

To Chris Bailey, an independent animation supervisor on the film, the pipeline technology is an invisible part of the digital artist’s arsenal. Bailey was involved in the film from the earliest stages, starting while the script was in development, working with a crew of storyboard artists to break down the visual effects. His mandate was to cinematically meet the director’s vision, while keeping the overall shot count in check. On Alvin and the Chipmunks: The Squeakquel, Bailey represented Alvin, Simon, Theodore, and The Chipettes on set, acting out their performances with toy stand-ins in the context of the live-action performers.
He also was equally involved with the animators, as sequences were tied together to fine-tune CG performances. “Computer graphics technologies are as much of an indispensable tool today in animation as a pencil,” explains Bailey. “What’s great about drawing with a pencil is that you can just drag it across a piece of paper and make an instant statement. When I started making CG and animated movies over 20 years ago, the delay in posing a character and getting it to move was actually [many] minutes—you could go to lunch just to get the wireframe to compute and play back,” continues Bailey. “I remember when it became a phenomenal achievement to manipulate a CG wireframe of a character and play the animation back in real time. With access to today’s computer graphics technologies, you can do all of that with a fully shaded character, and animating on screen becomes as immediate as drawing on paper.”
Having overseen the CG artists on The Squeakquel, Bailey attests that animators get pulled out of the moment every time they have to wait for a shot to take effect. By removing or minimizing the barriers between what is in the artist’s head and the time it takes to get it on screen, the creative process is much more fluid and the results become much more compelling. While graphics cards run in the background of the animation pipeline, to most of the artists who work on them, they are a key component to providing R&H artists with the interactivity necessary, and the ability to iterate and experiment that’s required to achieve optimal results.

R&H combined CGI and live action in The Squeakquel.

Part of the efficiency of R&H’s animation process is due to a series of proprietary tools that have been battle-tested and production-proven over the company’s many film projects, which include multiple visual effects Academy Awards and nominations. This tool set includes Icy for compositing, Voodoo for animating, Wren for rendering, and Eve for playback; they run alongside Autodesk’s Maya modeling software and Side Effects Software’s Houdini effects package.

For this particular film, R&H software engineer Nathan Cournia developed custom code for previewing lighting interactively on the GPU from within Voodoo, without having to create a time-intensive, full render of the scene. Thus, artists and executives could quickly use high-quality preview renders to make decisions about the direction of a shot or evaluate various lighting options almost instantly. “On The Squeakquel, Quadro’s geometry throughput allowed artists to visualize high-fidelity models and lighting scenarios. Being able to view high-resolution geometry at real-time rates aids both animators and lighters in quickly making artistic decisions,” Cournia notes. ■
USER FOCUS: CHARACTER CREATION
New Multibridge Pro has SDI, HDMI and Analog editing with multi channel audio for only $1,595

Multibridge Pro is the most sophisticated editing solution available, with a huge range of video and audio connections and the world’s first 3 Gb/s SDI. Advanced editing systems for Microsoft Windows™ and Apple Mac OS X™ are now affordable.
World’s Highest Quality

Multibridge Pro includes 3 Gb/s SDI and Dual Link 4:4:4 SDI for connecting to decks such as HDCAM SR. Unlike FireWire, Multibridge Pro has a 10 Gb/s PCI Express connection for powerful HD real time effects in compressed or uncompressed video file formats.
Connect to any Deck, Camera or Monitor
Microsoft Windows™ or Apple Mac OS X™
Multibridge Pro is the only solution that features SDI, HDMI, component analog, NTSC, PAL, and S-Video for capture and playback in SD, HD, or 2K. Also included are 8 channels of XLR AES/EBU audio, 2 channels of balanced XLR analog audio, and 2-channel HiFi monitoring outputs. Connect to HDCAM, Digital Betacam, Betacam SP, HDV cameras, big-screen TVs, and more.
Multibridge Pro is fully compatible with Apple Final Cut Pro™, Adobe Premiere Pro™, Adobe After Effects™, Adobe Photoshop™, Fusion™ and any DirectShow™ or QuickTime™ based software. Multibridge Pro instantly switches between feature film resolution 2K, 1080HD, 720HD, NTSC and PAL for worldwide compatibility.
Advanced 3 Gb/s SDI Technology

With exciting new 3 Gb/s SDI connections, Multibridge Pro allows twice the SDI data rate of normal HD-SDI, while also connecting to all your HD-SDI and SD-SDI equipment. Use 3 Gb/s SDI for 4:4:4 HD or edit your latest feature film using real time 2048 x 1556 2K resolution capture and playback.

The Drawn Together images are courtesy of Comedy Partners.
Multibridge Pro
$1,595
Learn more today at www.blackmagic-design.com
Blur Studios Hooks Audiences with Animated Goldfish Campaign
If you missed the third season final episode of the Goldfish crackers animated commercial series, Gilbert is gone. The unfortunate cracker character was sucked up by a vacuum cleaner. But Gilbert’s loss turned out to be Blur Studios’ gain. The Venice, California-based production company designed, developed, produced, and finished the follow-on 10-spot series of commercials for Pepperidge Farm Goldfish crackers and ad agency Young & Rubicam.

Blur created a series of animated television commercials that follow the CG Goldfish on exciting, colorful out-of-bowl experiences.

The Season 4 series featured the familiar cast of adventurous animated Goldfish characters (plus a few new friends), but it marked the first time the campaign presented a progressive story arc spanning 10 commercials: the search for Gilbert. Directed by Blur’s Leo Santos and executive produced by Blur’s Al Shier, the series debuted in mid-November, with new spots revealing twists in the Goldfish mystery every six weeks.

The first spot, “Blast Off,” took the Goldfish gang on a paper airplane ride in search of their lost friend. It was the first time in the multi-season campaign in which the Goldfish characters explored environments outside their world “under the bed.” In Season 4, they engage in action-packed antics that included flying down a staircase, going for a spin on a cuckoo clock, and playing chase with a hound dog.

Blur’s animation team worked on the campaign for a year, which included three months of preproduction and nine months of animation. Young & Rubicam, New York, selected Blur after a multivendor proposal/test process. Having completed multiple film, television, commercial, and game projects, Blur’s animation team was well versed in character animation and ready to dive into this project. Content producer Salima Millott says, “We selected Blur for Season 4 because we wanted an entire season of episodic spots with a cinematic look and feel. Blur delivered and treated our commercials as they would a film or television project, with film-quality animation, environments, and nuanced performances.”

According to art director/CG supervisor Dan Rice, to help define the characters’ world view, Blur created what the crew termed “the Goldfish Perspective.” “We asked ourselves, ‘How does the everyday world look from a cracker’s point of view?’ We used a variety of editorial tricks to draw the audience into each adventure. Micro lens photography brought the audience up close and personal to the characters, and building an entirely CG world gave us great flexibility and allowed our cameras to get into the nooks and crannies of this large world, which created a totally new experience for audiences.”

Santos adds, “We packed an incredible amount of action into each tale. We pushed our camera work to convey how large and exciting a regular living room would be for characters as small as ours.”

Rice notes that color and light also helped drive the story throughout. “Rich textures, realistic materials, and environments made the final sophisticated touches to give top-notch production value to the whole campaign. We wanted audiences to feel the fuzziness of a rug, the textures of a couch…that kind of thing, to give them the sense that they’re right there.”

With familiar, compelling characters at the heart of the series, reinforcing the Goldfish characters’ personalities and relationships was a key component of the project. “As a group, these characters needed to be like a community of friends that kids can identify with,” says Santos. “Each Goldfish character has a distinct personality, from exuberant to adventurous, and the animation really brought them to life. It was important that each could be expressive in close-up shots, and they needed to talk and emote with believability.” Rice points out, “We worked carefully on the character rigging, color, and textures to give them a lifelike feel, and emphasized their distinct personalities through shape, form, and color to help kids associate with each character’s individual traits.”

To bring the characters and environments to life, Blur used Autodesk’s Softimage for character animation and modeling. The artists completed the environments and effects using Autodesk’s 3ds Max, and used Mental Images’ Mental Ray to render the spots and add cinematic lighting effects. With compelling characters and lots of adventure, Blur created a cinematic series that hooked young audiences even further into the Goldfish world. While each TV spot is a self-contained adventure, when viewed in a series on the Web, they play like a six-minute animated short film.

Blur modeled and animated the characters and backgrounds using Softimage, and used 3ds Max for further set work and for effects.

For Blur, the project was a true pleasure to produce. “This was such a great campaign to work on. It’s character-driven and story-driven—an epic journey,” says Shier. For Young & Rubicam, the experience was positive, as well. Says Millott, “Blur artists are not only incredibly creative, they are also just great people to work with.”

As for Gilbert…you’ll have to check blur.com to find out how he fared. ■

USER FOCUS: ANIMATION
Windows-32 / Windows-64 / OS X-32 / OS X-64
32-bit only $399
High Performance Camera Tracking
Use SynthEyes for animated critter insertion, fixing shaky shots, virtual set extensions, making 3D movies, architectural previews, accident reconstruction, virtual product placement, face and body capture, and more.
Includes Stereoscopic features used in AVATAR
■ ■ ■ ■
Animation
Imageworks manipulates photographic elements, creates CG characters and environments, and puts it all together in stereo for Tim Burton’s Alice in Wonderland
By Barbara Robertson
Listen to the visual effects crew at Sony Pictures Imageworks who worked on Disney’s Alice in Wonderland describe postproduction for this film, and you’d think director Tim Burton had dropped them down a rabbit hole, too.

“I like doing something different, and this film begged to use bizarre techniques that we haven’t done before,” says Imageworks’ senior visual effects supervisor Ken Ralston, who counts an Oscar for the pioneering film Who Framed Roger Rabbit among the five he has won for best visual effects.

“Everyone was not normal,” says Carey Villegas, referring to the live-action characters in the film. “We scaled or manipulated everyone in some way.” Villegas was one of two VFX supervisors who worked with Ralston on Alice.

Sean Phillips, the second visual effects supervisor, adds, “I’ve never seen anything like this before—mixing and matching a wacky blendo of stylized photography with CG characters. We’ve done that on a small scale, but never before had a CG character next to a live-action character next to one with a CG head next to one scaled up or down, all having a conversation together. We had to make them live inside that stylized world and not get lost in it.”

And this time, they had themselves to blame, at least in part, for the wackiness.
We’re All Mad Here
From the top down: Wooden frogs painted green give Helena Bonham Carter a proper eye line on set. Imageworks filmed Carter’s head at 4k resolution, shrank her body to HD resolution, and blended the two. The frogs, birds, and monkeys are CG; the courtiers are filmed elements. At bottom, once animated, the animals get fur, feathers, and textured skin. At left, the final image. ©2010 Disney Enterprises, Inc.
Ralston, who joined the project before Burton had finished the script, worked with the director to design what became the “wacky blendo” look of the film. “We decided we didn’t want to go down an all-mocap look, like Beowulf,” he says. “We wanted to see the actors.” And with a cast like this, it’s no wonder: Johnny Depp (Mad Hatter), Mia Wasikowska (Alice), Helena Bonham Carter (Red Queen), Anne Hathaway (White Queen), Crispin Glover (Knave of Hearts), and Matt Lucas (Tweedledee and Tweedledum).

“Tim [Burton] wondered what we could do to put actors in a limbo area, where they were not quite real, but real,” Ralston says. So, Ralston worked with artists at Sony Pictures Imageworks and character designers Michael Kutsche and Bobby Chiu, whom he found after an Internet search, to give Burton interesting ideas. The team decided to enlarge Johnny Depp’s eyes, swell Carter’s head and shrink her waist, and fiddle with color timing for Anne Hathaway. They sat Crispin Glover’s head on a tall CG body, and pasted Matt Lucas’s face on two digital Tweedles.

“When we started, Tim [Burton] wanted to be sure we didn’t just have live-action actors in a virtual world with CG characters,” Villegas says. “We did all that manipulation to bridge the gap and bring the two worlds together.” The only real person is Alice, in that the artists didn’t manipulate her image, but she’s never the same size twice.

In addition to photo manipulation, Imageworks created some 30 CG characters, including the White Rabbit, Cheshire Cat, Jabberwock, Bandersnatch, the Red and White Knights, March Hare, and more—every creature in the film that isn’t human is animated. Artists in the studio blended all these CG characters with the manipulated shots of the actors and the photo/CG characters inside virtual environments. Then, they converted everything into stereo 3D. And all on a short schedule. “I’m always amazed by the sheer volume,” Phillips says.
“We had 1700 shots with fully fleshed-out CG environments, and we had a year to do the whole thing.” Among the tools used by artists at Imageworks were Autodesk’s Maya and Mudbox for modeling, rigging, animation, cloth and hair simulation, and some effects; Side Effects Software’s Houdini for effects; Maxon’s Cinema 4D for projection painting; Adobe’s Photoshop for matte painting; The Foundry’s Nuke, Autodesk’s Flame, and Imageworks’ own Katana for compositing; Katana for lighting; and Imageworks’ Arnold for rendering. Other studios helping on the project, largely with the stereo 3D conversion and environments, were CafeFX, Matte World Digital, Nvizage, and Svengali, with The Third Floor providing previs.
Once Alice arrives in Wonderland, the environments are nearly always CG. The scale of the computer-generated plants in this garden helps us understand that she’s very small.
Come, We Shall Have Some Fun Now
The film begins and ends in Victorian England, in live-action scenes, but most of the story takes place in Underland, also known as Wonderland, where Alice finds herself, a decade after her first fall, once again on a trippy adventure with her wacky childhood friends, all of whom managed to be on set with her during filming. Even the CG characters. “We recorded the animated actors in London, but we had voice talent on the set to have eye lines,” Ralston says. “Alice is six inches, two feet, eight and a half feet, and, for one moment, 20 feet tall, and then at the end, her normal height. So, the eye lines changed from scene to scene. If we didn’t have the real voices, we had voice talent in green suits.”
Because most of the film takes place in CG environments, the sets were green with green props arranged to follow production designer Robert Stromberg’s plans. “We had at least three 360-degree greenscreen environments, 60 feet tall by 300 feet long,” Villegas describes. “We had them on rails so we could open and close them as needed. We lived in a greenscreen world for many months.” The actors could view a miniature to see what the environment would look like, and when Alice changed sizes, green platforms on set helped the actors have the right eye lines. “The scales are all nuts,” Ralston says. “It took an awful lot of tweaking.”
In one shot, for example, a tall Alice holds the Mad Hatter’s hat. “Mia [Alice] is shorter than Helena [Red Queen], and Alice is supposed to be eight and a half feet tall,” Villegas says. “So we shot Mia on a platform. In post, we scaled everything so her feet touch the ground, but her eye line stayed in the same place.” In another scene, a tiny Alice jumps onto the Mad Hatter’s hat, crawls around the brim, drops to his shoulder, and then the two walk into the CG forest. “We used motion-control photography for that shot,” Villegas says. “It was the only time we shot motion control.”
To make the scaling possible, the crew filmed scenes with Alice in odd sizes, all shots with Helena Bonham Carter, whose head needed to double, and other shots involving abnormally sized humans with a Dalsa Evolution 4k digital camera; a Genesis HD camera handled the rest. By scaling Carter’s body down to HD resolution, for example, they doubled the size of her head, which they had shot at 4k. Actors working with Carter pretended her eyes were about two inches higher when they looked at her. The same technique made Alice shrink and grow.
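The arithmetic behind that trick is simple: keep one element at its native 4k width while the rest of the plate is scaled down to HD, and the element reads roughly twice as large. A back-of-the-envelope sketch (the frame widths are assumptions, since the article says only “4k” and “HD”):

```python
# Rough arithmetic behind the 4k/HD scaling trick. Frame widths are
# assumptions; the article gives only "4k" and "HD".
K4_WIDTH = 4096   # typical 4k digital-cinema frame width, in pixels
HD_WIDTH = 1920   # HD frame width, in pixels

def relative_scale(native_width, downscaled_width):
    """How much larger an element kept at native resolution appears
    next to footage scaled down to the smaller frame."""
    return native_width / downscaled_width

scale = relative_scale(K4_WIDTH, HD_WIDTH)
print(f"A head kept at 4k reads about {scale:.2f}x the size of the HD-scaled body")
```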
A World of My Own
In the film, Alice begins her journey at a garden party. Led by the White Rabbit into the forest, she falls down the rabbit hole and lands in a round room, a set built at normal size. Then, she shrinks to two feet tall and walks into a CG garden that looks like an English garden in disrepair. From that point until the end of the film, with few exceptions, the environments are digital: the mushroom forest where she meets Absalom the CG caterpillar and is chased by the vicious CG Bandersnatch, the Tulgey Wood where she meets the CG Cheshire Cat, a big desert environment made from projected matte paintings, the environment around the tea party (which takes place on a set), the ruins of an old temple, the Red Queen’s castle, and more.
“There are a lot,” Phillips says. “And we had the challenge of using the environments to reinforce Alice’s size.” When she’s six inches tall, big dandelion seeds float through the environment. When she’s large, she ducks through CG doorways. The modelers created details based on how close the camera would move, the scale, and the amount of time the characters would spend in the environments.
So that Burton wouldn’t be shooting blind in this greenscreen world, Imageworks modeled every CG environment before he filmed the live-action actors. “The idea was that Tim could virtually see the characters in that [digital] environment on set,” Villegas says. To help Burton see Alice and the other characters in their Wonderland environments as he filmed the actors, the crew used two solutions for real-time camera tracking. The EncodaCam system from General Lift and Brainstorm tracked anchored cameras and inserted characters shot on the greenscreen stage into the backgrounds from the camera’s viewpoint. An optical system developed by Imageworks’ R&D department tracked handheld cameras by focusing on markers in a grid on the ceiling. In addition to giving Burton and his director of photography a way to see the actors in the forest, in the Red Queen’s bedroom, and in various other places, the composites gave Burton’s editorial department a guide when they cut the movie.
What Size Do You Want to Be?
Every shot in the film involving live-action photography moved through a pre-composition department led by Villegas. First, though, the matchmovers tracked the camera using the 4k-resolution frames. “The matchmove systems don’t really know about resolution,” Villegas says. “So we could scale the film back and maintain the matchmove.” Once the pre-comp department had the camera, the artists could prep the plates. “On a traditional movie, pre-comp is usually the easy part,” Villegas says. “On this movie, precomp is where some of the really hard work was done.” The artists had to key the characters as usual, and then had to fit the variously scaled live-action elements into CG environments.
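Villegas’ point that a camera track survives rescaling the plate can be pictured with normalized coordinates: a solved 2D track lives in resolution-independent screen space and is only mapped to pixels at the end. A loose illustration, not Imageworks’ pipeline; the function name and values are invented:

```python
# Why a matchmove survives a resolution change: express tracked points in
# normalized (0-1) screen coordinates, independent of pixel resolution.
# Illustrative only; this is not a real matchmove API.
def to_pixels(norm_xy, width, height):
    """Map a normalized 2D track point to pixel coordinates for a frame size."""
    x, y = norm_xy
    return (x * width, y * height)

track = (0.25, 0.5)                   # the same solved track point...
print(to_pixels(track, 4096, 2048))   # ...on the 4k plate
print(to_pixels(track, 1920, 1080))   # ...on the HD-scaled plate
```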
“We had to reframe most of the shots because of the scaling,” Villegas says. “And, we made those decisions in the pre-comp phase. [DP] Darius Walsky did the initial framing on set, but he was gone by this time. So, we worked with Tim [Burton] and Ken [Ralston].” When the artists had positioned all the characters in their final sizes and reframed the shot as if the characters had all been photographed in those sizes, the photographic elements moved on to layout for integration with CG props and the CG environment. From there, the rough compositions moved to animation and stereo layout. “At this point, the pipeline became a little more standard,” Phillips says. For characters like the Knave of Hearts and the Tweedles, which were part-photographed actors and part-CG, the animators could see photographic elements glued to 2D cards that always turned toward the camera within a 3D scene. “Our biggest rigging challenge was incorporating the photo-heads,” Phillips says.
This Is Impossible
To create the Knave of Hearts, for example, animators needed to fit a seven-and-a-half-foot-tall CG body onto Crispin Glover’s moving head. On set, Glover sometimes stood on a platform, and sometimes walked on stilts. He wore a suit fitted with Xsens Technologies’ inertial motion-capture system MVN (formerly Moven) to give animators the option of using motion-capture data for his body. “It would have been really hard for the animators not to have had the head in the 3D system,” Villegas says. “They needed the system to visualize.” Similarly, when Alice rides a CG Bandersnatch, animators had the live-action photographic element on a card in the 3D environment. And, for the Tweedles, a card that tracked with Matt Lucas’s shoulders made it easier for the animators to perform CG bodies and place the photographed eyes and mouth on the CG heads. “We aren’t projecting photographs onto geometry,” Phillips says. “We’re trying to orient the geometry to the photographs. But, we gave the animators the flexibility to change the photography if they needed to.”
As it turned out, the animators did not use motion-captured data for the Knave of Hearts. “Crispin [Glover] practiced walking on stilts and got it down, but when you translate that into a properly proportioned [tall] human, the walk looks funny,” says David Schaub, animation supervisor. “So we animated him from the ground up using his performance as reference. It was difficult to animate his body under the head, which was extracted, stabilized, and on a card in Maya. It was the same with the Tweedles.”
For the Tweedles, the team mocapped Lucas wearing a padded costume on set. The animators used that data and reference footage to help perform the two characters’ bodies. “We restricted his movements with a pear-shaped foam suit, so his arms didn’t move past the limits of the CG character,” Villegas says. “He performed with a stand-in. When Tim was happy with Tweedledee, [Lucas] switched places and became Tweedledum to get facial performances for both characters.” To make those performances believable on the CG characters, the animators worked with fully rigged faces using reference footage as a guide. “Even though we used Matt Lucas’s real eyes and mouth, all the stuff around them needs to move,” Schaub explains. “His eye movements need to pull the skin around the eyes. It looks weird, but in a good way.” Because Alice’s size changes from sequence to sequence, the CG Tweedles often needed to grow and shrink, and stereo made all this even more complicated for the animators, who needed to consider Alice in depth as well. “If a CG character holds hands with a live-action actor in 2D, you can cheat that,” Schaub says. “But in stereo, the depth gives away the cheat.”
At top left, Matt Lucas on the right and a stand-in on the left wear the Tweedle suits for motion capture on set, then switch positions. At top middle, artists apply Lucas’s eyes and mouth to the CG characters and use video from each performance as reference. At top right, animators tweak the body mocap and technical artists add cloth simulation. Above, the final image.
A Grin Without a Cat!
At the peak of production, Schaub led a team of 63 animators: specialists who concentrated on the hybrids, and others who animated the non-human CG characters. “We had so many specialized characters that appear in only a small cluster of sequences, it didn’t make sense to have a team working on them,” Schaub says, “although in the throes of production, it became a melting pot. We had nine months to do the animation.”
The characters ranged from the whimsical, fanciful Cheshire Cat to the vicious Bandersnatch, to natural animals like a horse and a bloodhound. “Tim wanted to shoot the movie and not deal with animals,” Schaub says. Riggers started with generic bipedal rigs for the humans, chess pieces, monkey, frog footman, and executioner, and with standard rigs for the quadrupeds, such as the dogs, pig, and hoofed creatures. “On past shows, like Narnia, we rigged characters so they could do calisthenics,” Schaub says. “No matter how you posed them, they’d be anatomically correct. But it turns out you need only 20 percent of those cases. So gone are those days.”
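The “2D cards that always turned toward the camera” described above are standard billboards: the card’s normal is continually aimed at the camera so the flat photograph never reveals its edge. A minimal sketch of the orientation math (the helper names are invented, not Imageworks’ rigging code):

```python
import math

# Minimal sketch of a camera-facing card ("billboard"): aim the card's
# normal at the camera so the flat photo element stays square to view.
# Invented helpers, not Imageworks' code.
def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def billboard_normal(card_pos, camera_pos):
    """Unit vector the photo-card's normal should take so the card
    faces the camera from wherever it is viewed."""
    return normalize(tuple(cam - card for cam, card in zip(camera_pos, card_pos)))

print(billboard_normal((0, 0, 0), (0, 0, 10)))
```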
All the animals in Alice are CG, including this bloodhound cuddled by Anne Hathaway. Imageworks used cloth simulation to help make his floppy ears and loose skin believable.
Instead, the riggers created simpler rigs for what they knew the character would do, rather than for anything a character could do. “We don’t rig for every eventuality and carry all that stuff around,” Schaub says. “We have a rigging team ready to build one-off features on a shot-by-shot basis.” The Bandersnatch provides an example of such a one-off rigging solution. “He has a bear body and a bulldog face, with shark’s teeth,” Schaub describes. “The teeth pierce his lips, which was already tricky from a rigging perspective. When he licks Alice’s wound, we had a one-off rigging solution to get his big tongue through the teeth.” Similarly, when they realized the horse needed to talk, riggers built solutions only for the phonemes they needed for that specific dialog, rather than a rig that would allow the horse to say anything.
For the animators, the fanciful characters provided some of the most interesting creative challenges. “At first, we had the Dormouse run on its hind legs, and the White Rabbit did the same, Tom and Jerry like,” Schaub says. “We put that in front of Tim [Burton] to see his response. I think he was entertained with it, and we went down that path for a while.” After Burton finished filming, though, he asked for a subtler performance. “He didn’t want Roger Rabbit against the human performances,” Schaub says. “He wanted to be sure all the characters were in the same world. So we started going more and more subtle. Nothing overacted. Just the bare minimum to make the point.”
The Cheshire Cat, for example, one of the first characters the team animated, moves its tail to accent the dialog spoken by actor Stephen Fry; it acts like a cat, not a human, even when it floats, appears, and disappears. To create emotions for a character with a perpetual grin, the animators played with extremes for the smile. “But much of it is in the eyes,” Schaub says. “As he goes in and out of light, we played up the idea of brightening the eyes. The iris lights up in a special way that suddenly brings a spirit of life into the character, as well.”
Similarly, the animators played Absalom the caterpillar, voiced by Alan Rickman, with a subtle, disdainful attitude. “He wears a little monocle on his squished nose, so we have him look down his nose at Alice,” Schaub says. “We make sure his body undulates as he talks and exhales, so you can feel a gelatinous movement in time with the dialog, all through keyframe animation.”
The tense and nervous March Hare was one of the last characters the team nailed. “Tim [Burton] described him in a way that reminded me of Charlie Chaplin in Modern Times,” Schaub says. “We emulated Chaplin’s funny walk for a while, and gave [the Hare] a nervous twitch. And then, we pulled back.” The animators put the tense creature on all fours and had it hop like a hare, but every once in a while, it has a big nervous spasm.
(Top left) Animators used the Cheshire Cat’s tail to help emphasize the dialog for this character, which has a perpetual grin. (Top right) The tense and nervous March Hare is about to lose control once again.
Many of the CG animals are under the Red Queen’s rule. She hates them and proves it by using the animals as furniture, tools, and servants. Flying birds hold up chandeliers, capuchin monkeys are bellboys, frogs are waiters, and flamingoes are mallets for a croquet match. A fish butler scoots around on its tail. Most of these characters have fur or feathers, or wear clothing, and the studio has been working on these modeling, grooming, and simulation problems since Stuart Little. “We’ve settled into a blend between hair and feathers that we call ‘furthers,’ which are like fat pieces of hair,” Phillips says. “We do the majority of feathers that way, although flight feathers on the wings are geometry. For the bloodhounds, we did a cloth-like simulation on the skin to get the jiggle, and for his ears.”
One of the most impressive CG creatures in the film is the Jabberwock, which Alice must confront at the end, and which the team based on John Tenniel’s illustrations of the dragonlike animal in the original Alice in Wonderland book. “Even though he looks like he should be able to fly, we make sure he can’t,” Schaub says. While Alice confronts the Jabberwock, a battle rages in the background between the red and white knights. Inspired by chess pieces, the white knights look like human figures with alabaster armor, which put the studio’s renderer to the test. “They’re translucent, so we used subsurface scattering in Arnold,” Phillips says. “It looked very cool when it was done.” The wide and thin red knights, by contrast, look like playing cards and are made of slightly bendy, interleaved steel plates. “Everything in the battle is CG,” Schaub says. “We had animators tear loose with lots of battle business.” Libraries of fighting characters created by the animators powered through the battle with help from the studio’s own crowd animation software.
The combination of familiar techniques challenged by creative designs, and new techniques developed to handle the photographic manipulation and hybrid blends unique to this film, made working on Alice in Wonderland an amazing adventure for the crew. “After all these years doing so many movies, this might be the most creative thing I’ve ever done,” Ralston says. “And maybe the most fun. It’s been a blast.”
Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at [email protected].
At left, precomp artists frame the Knave of Hearts’ (Crispin Glover) head within a CG environment. At right, the final image shows the Knave’s seven-and-a-half-foot CG body attached to the head and the environment fully textured, lit, and rendered.
Like No Place on Earth
Director Tim Burton filmed most of the movie on greenscreen stages. In the film, the actors appear alongside CG characters in CG environments.
Knowing this project would be shown in stereo 3D, the crew on Alice considered shooting the live-action elements with a stereo 3D camera at first. “We started that way, but once we realized how much photo manipulation we’d do, we knew we couldn’t use a standard HD camera,” says Sony Pictures Imageworks visual effects supervisor Carey Villegas. “At the time, there were no 4k stereo camera rigs on the market.” But, they wanted to use a 4k-resolution camera so that in postproduction, they could scale some elements down to HD resolution and leave others, such as the Red Queen’s head, large. Moreover, says senior visual effects supervisor Ken Ralston, “this is so complex, and to have shot a true 3D movie would have caused so much grief. Also, we thought that having stylistic control of the conversion would be interesting.”
Interesting and tricky. “When you combine CG characters, and environments, and photographic elements, and try to dimensionalize everything, making everything feel like it’s in the same environment is complicated,” says Corey Turner, Imageworks’ stereo 3D supervisor. The photographic manipulation and scale changes magnified the complexity.
Turner describes one shot by way of example. In it, the Hatter, actor Johnny Depp, is trying various hats on the Red Queen, Helena Bonham Carter. Carter’s head is twice its normal size, and Depp’s eyes are larger than normal, too. The Red Queen is looking into a mirror. Her back is to the camera; we see her reflection. On screen right, courtiers photographed by a second-unit team speak to the Red Queen. The environment is CG with photographed props—hats, hat stands, and furniture. CG monkeys hop around and give photographed hats to Johnny Depp on screen left.
To create this shot, the pre-composition team cleaned up the photographic elements for the Hatter and the Red Queen shot on the greenscreen stage, scaled the Queen’s head, added the courtiers, and framed the shot. The stereo team added dimension, but it was a mix-and-match, back-and-forth workflow. For example, to give the photographed courtiers dimension, the stereo team gave the pre-comp team a scene with stand-ins and a stereo camera. The pre-comp team projected the photographed images onto simple shapes, a pair for each image (one for the left eye, and one for the right). The stereo team then converted those shapes into more detailed geometry, a technique they used throughout the film for photographed elements—the Red Queen and the Hatter in this scene, Alice in other shots, and so forth. “We swap in shapes to give us curvature, such as a shoulder that protrudes, a nose, a huge belly, and then re-project the image and warp it to fit,” explains Turner. Similarly, they projected the photographed hats onto 3D shapes when the CG monkeys pull them off the hat stand. “As the monkeys hopped across the screen, it felt as if they walked through one of the hat stands,” Turner says. “So we animated a stereo camera to cheat them forward, and then animated it back to get the connection with the Hatter.”
Multiple stereo cameras with specific interocular and convergence settings came in handy throughout the film. In this scene, the Red Queen, Hatter, and props have one set of stereo cameras. The courtiers have another. The CG monkeys have a third. Similarly, when CG characters surround Alice, she has her own set of stereo cameras and the characters have another. “I originally thought I would use one set of cameras and it would be fine,” Turner says. “But I found that the projected elements are so flat to begin with, if I overdo the stereo for them, it helps sell the shot and you no longer feel like they and the CG elements are two separate things.” –Barbara Robertson
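The per-element stereo rigs described above can be sketched with the standard parallax relation: a point at the convergence distance has zero on-screen parallax, and widening the interaxial for one group of elements deepens only that group. A toy model with invented numbers, not the film’s actual camera settings:

```python
# Toy model of per-element stereo rigs: screen parallax for a point at a
# given depth, zero at the convergence plane. Values below are invented.
def screen_parallax(interaxial, convergence, depth):
    """Horizontal parallax (same units as interaxial); positive behind
    the convergence plane, negative in front of it."""
    return interaxial * (depth - convergence) / depth

# Doubling the interaxial for a flat projected element doubles its
# parallax, giving an otherwise card-flat image more depth:
print(screen_parallax(6.5, 300.0, 320.0))
print(screen_parallax(13.0, 300.0, 320.0))
```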
Digital effects help TV ads score with the Super Bowl audience
By Debra Kaufman
Super Bowl 2010 was a year to remember. With the Indianapolis Colts heavily favored, the New Orleans Saints still came marching in, thrilling die-hard fans in the Big Easy, as well as those across the nation rooting for the underdog to claim its first-ever world championship. But Super Bowl commercials this year weren’t nearly as thrilling. Although the cost of a Super Bowl spot has skyrocketed, the number of eye-catching VFX commercials on display this year was paltry. Has the money gone to buying the time instead of creating the spots?
This year’s Super Bowl commercials took in an average of $3 million per 30-second spot, with total ad revenue of $213 million. According to Mediaweek journalist Anthony Crupi, that is a 329 percent increase from CBS’s price of $700,400 for the 1990 game. Turning back the clock to 1967, a commercial running during the Super Bowl went for just $42,500. Fast-forward to 2010, when the most popular commercial was the extraordinarily simple Google ad, which charted a budding romance, marriage, and family simply through search terms typed into a Google box on a computer screen.
It’s hard not to see this pullback from extravagantly visual spots as a response to the economic downturn. With thousands out of work, perhaps the brands reasoned, it would be unseemly to spend millions on a commercial. Yet, many did—with dismal results. Nonetheless, a handful of great-looking CG animation and visual effects spots did rise to the top, including one from Anheuser-Busch, a company that always seems to score high in terms of commercial appeal, as well as Super Bowl newcomer Vizio, which used CG technology combined with star power and pop culture to peddle its televisions. Animated animals are always a sure bet, as Honda and Monster.com realized with a squirrel and beaver, respectively. And, with Valentine’s Day right around the corner, Teleflora got into the act and came up smelling rosy with animated flowers.
Broadcast
Bridge • Budweiser
Director: Paul Middleditch
Agency: DDB Chicago
Production company: aWhitelabelProduct
CG company: The Mill NY
In “Bridge,” the people of Any Town, USA, are distraught to learn that the Budweiser delivery truck can’t make it because the bridge is out. Determined to get their Budweiser, they form a human bridge to allow the truck to roll into town. Westley Sarokin, lead Flame artist at The Mill NY and co-shoot supervisor with Yann Mabille, started pre-production on the spot in November, creating previs for the project using Autodesk’s Softimage. “In making a bridge out of people, we wanted to know what the bridge looks like and how it’s structured,” says Sarokin. “We went through many designs to come up with something that fell together but had a coherent shape.”
Santa Paula, California, a picturesque little town near Los Angeles, became Any Town for the shoot, while the bridge sequence was filmed on a farm a few miles away. According to Sarokin, he and the crew arrived at the shoot site a few days early and sat down with the agency and director to map out what they wanted to capture on film. The bridge would consist of the rails (people standing on the side), the road (people laying down horizontally), and the five main
For this Budweiser Super Bowl spot, The Mill NY constructed a human bridge using live-action components and 200 digital people. The CG bridge was built in Softimage.
The goal was to shoot as many practical elements as possible to lend to reality, says Sarokin. “We were trying to get a rough idea for shot-blocking for the storyboards, but because we didn’t know the terrain yet, we kept previs as generalized as possible to get a basic idea of the size of the bridge and how they wanted it to move,” he explains. “We went over all the animation possibilities.”
pillars supporting the bridge. “Shot by shot, we looked at which elements we could use for multiple shots, and which elements we needed to get practically,” he says. Post-shoot, Sarokin and Mabille found they didn’t quite have everything they needed. “Every time you go out on set, you never really know what you’re going to get until you sit with an edit and see the footage come together,” says Sarokin. “On site, you do the best you can to match camera angles,
March 2010
23
n n n n
Broadcast
lighting, and so on, but it’s not an exact science. There were a few circumstances where we needed an element.” To fill in the missing pieces, the team set up a greenscreen shoot, using a Canon 5D Mark II camera, on the rooftop at The Mill. “We put people in the perspective we needed to use them as live-action components within the composite,” says Sarokin. “The most important thing was to expose things properly to match the time of day and lighting direction.” In post, Sarokin was the lead visual effects artist and Mabille was in charge of 3D. “The main point was to have as many realistic-looking people as possible and that the animation of their interrelations made sense,” says Mabille, who used Softimage as his main 3D tool. All told, the group created nearly 200 digital extras. “There are a couple of live-action plates of the base of the pillar that we used, and then we extended them with CG people,” Mabille
Despite the pre-planning, The Mill NY had a mere three weeks to get the plates to delivery. Still, “our approach and the techniques we set up early on served us well,” says Sarokin. “We were able to make it look as amazing as possible and add as much detail as we could.”
Forge • Vizio
Director: Wally Pfister, ASC Agency: Venables Bell & Partners Production company: Independent Media, Inc. CG company: MassMarket In this hyperkinetic spot, Vizio shows how the worlds of Internet video and television come together, as robotic arms snatch up singer Beyonce and place her inside a cubicle. As the camera pans, we see numerous cubicles, inhabited by the Numa Numa guy, the Twitter bird, musician Tay Zonday, a Flickr
The Vizio commercial combines star power, pop culture, live action, and CGI—the latter involving robotic arms and more created by MassMarket. describes. “And we built the smaller pillars entirely out of CG people. All the shots looking down at the bridge are re-created in CG as well, because we didn’t shoot any live-action people from this camera angle.” Softimage’s Ice was used to refine the simulations and handle the massive amount of data. “Ice allowed us to organize and manage the information,” says Mabille. “We had to assimilate the whole bridge animation with the truck that presses down on the people, and had to have them all react accordingly and without stretching.” The artists used The Foundry’s Nuke to do rough comps before sending them to Autodesk’s Flame for final compositing. “It lets us know if our 3D comps are going to work,” says Mabille. 24
March 2010
sign, a Facebook page, characters from movies, and more. Getting a jump on the project, Mass Market producer Paul Roy and his previs team started work between Christmas and New Year’s in anticipation of the four-day shoot that took place the first week in January at the Santa Monica Airport’s Barkar Hangar. Cinematographer Wally Pfister (The Dark Knight, The Prestige, Batman Begins) directed the commercial, his talents a perfect match for the spot’s moody look. “It was great working with Wally,” says Roy. “He is very familiar with 3D, greenscreen, and comps. He has done a lot of set extensions in the movies he’s done and worked with a lot of visual effects.”
The spot featured three sets of real cubicles, reveals Roy, but MassMarket then extended them with two computer-generated ones. “We added an extra row of them and then another set of five towards the back,” he says. Then there were the robot arms, key elements in the spot. Early on, the decision was made—for reasons of budget and time—to create them with CGI. “It’s actually cheaper to do it in CG than practically,” Roy points out. The production built a one-ninth-scale arm that was filmed for lighting references. “As soon as we had the design for the robot arm completed and approved, we built a CG version of that practical arm in the art department. We rigged it up so that it was ready before we finished shooting, which let us get started on the animation right away,” says Andrew Romatz, MassMarket visual effects supervisor for 3D. The digital content creation tool set that was used for the spot included Autodesk’s Maya, Adobe’s Photoshop, and Pixologic’s ZBrush, along with 2d3’s Boujou for camera tracking. Romatz and John Shirley, who was VFX supervisor for 2D, were both in attendance at the shoot. “Getting lighting to match was important,” says Romatz. “A lot of times, it’s hard to get those reference shots during a busy shoot, but Wally [Pfister] was very accommodating. In fact, he shot a lot of reference shots in 35mm.” On set, the group collected all the necessary data, captured HDR panoramic shots for lighting reference, and worked closely with the video playback person to check the grayscale and make sure it was the right angle. “It’s something you can’t foresee when you have a CG character pick up a 3D character,” Romatz explains. “You have to imagine timing and the impact of the CG, and anticipate how that will affect the live-action element so the actor can behave in a way that’ll look good.” The team conducted numerous animation tests, working closely with Pfister to find the sweet spot. 
"Some of the arms have more of an attitude, some are smooth and slower in how they grasp and pick things up," says Romatz. Getting the lighting to work properly was the biggest hurdle. "The surface [of the robotic arms] was so flat and squared off at the edges, so it was challenging to keep the reflections from getting too broad or big," Romatz says. "A lot of effort went to getting the reflections to look right," he adds, and to this end, the team did several bright and dark passes. Meanwhile, rough pre-composites were done in The Foundry's Nuke. Lead Flame artist Thibault Debaveye notes that it was tricky to find the proper look. "We had to be careful not to go too dark," he says. "We wanted it moody, but we also had to consider the final color look. In terms of integrating CG and live action, we went back and forth with making sure the colors and lighting directions matched perfectly, and that we were catching shadows from the live-action elements." In the end, however, the most demanding aspect of the spot was the one most often cited by VFX/animation houses working on Super Bowl commercials: the schedule. "Logistic issues arise from that," says Roy. "We worked long hours every day and, on the last day, put in 22 hours. Getting everyone on the same page in three weeks is a big job."
Squirrel • Honda
Director: Andy Hall Agency: RPA Production company: Elastic CG company: A52

To promote the new Honda Cross Tour, agency RPA worked closely with Elastic, which does conceptual and directing work, and visual effects firm A52—two divisions of the same company. "We have a long-standing relationship with RPA," explains Andy Hall, who was both director and head of CG on "Squirrel." "They approached us for the launch of the new Honda Cross Tour, with the idea of going back to the fundamentals of design and aesthetics." A four-spot package—the final one of which was "Squirrel" for the Super Bowl—ensued. The design of the spot, which integrates animation and live action, is hard to miss: a low-polygon animation style heavily influenced by the design aesthetic of the 1960s. "Saul Bass was someone they referenced," says Hall, referring to the legendary graphic designer/filmmaker. "We took the visual cue from that, and then emphasized it to be more contemporary." "Squirrel" reveals the capacious storage of the Honda Cross Tour via a squirrel that hides a pineapple, trophy, bowling ball, barbell, and armchair. "It's about finding the ultimate place to put stuff … and that's the car," says Hall. Some of the footage was already in the can when it came time to produce the Super Sunday spot. "We did two shoot-arounds of the car," says Hall, "one early in the process, and then they decided to use another color of car for the ending, so we shot that in December." All the live-action footage (which focused on the vehicle) was filmed in 35mm at South Bay Studios near Los Angeles.

The CG in "Squirrel" reflects Saul Bass's animation style from the 1960s, with its low-polygon look, only in this case, it was crafted using state-of-the-art 3D tools, including Maya.

Hall's background is in animation, so the process, for him, felt natural. "I storyboarded the whole thing in Adobe's Photoshop and cut a rough animatic in Adobe's After Effects based on the live action," he says. "That became the footprint for timing and a cue for the camera angles I wanted to capture." Once the animatic was in place, Hall points out, he started blocking out the shots and establishing the interactions in the six shots. "We did several frames up-front for the agency so they could share them with their client, for their comfort level," he says. "And there's that comedic element they wanted to introduce as the objects become more and more ridiculous." Another striking aspect of the animation is its color palette. "That was established from the first spot of the package," says Hall. "They wanted a distinct, vibrant look, and that's followed through to the last spot, with consistency among all four. The last one, 'Squirrel,' is focused on hues of oranges and reds, but aesthetically, they all complement one another."
Although it had a simplistic 2D look, the animation was 3D, created entirely in Autodesk's Maya. Compositing, however, was done in The Foundry's Nuke. "Obviously, we use [Autodesk's] Flame," says Hall, "but it made more sense to use Nuke so we could work in floating point. That enabled us to push the minutiae between color ranges more easily. Also, we could tweak and adjust the light in Nuke, rather than going back and relighting in Maya." The animation itself went very smoothly, notes Hall. "We're always trying to push the performance of the character, and there's an arc in the piece," he says. "The performance is based on the objects the squirrel is interacting with, and it becomes more extreme; so, it starts quiet and builds towards the end. It was a matter of finding that arc." Though the client's aesthetic touchstone was Saul Bass, Hall found his own inspiration during the creation of the spot. "It was kind of referencing Chuck Jones in terms of the simplicity and the performance," he says. "You're not dealing with any dialog. It's a low-res character captured by strong poses."
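Hall's reason for compositing in floating point is easy to demonstrate with toy numbers (illustrative values only, not data from the spot): once an intermediate result is quantized to 8 bits, subtly different values collapse to the same code value and the distinction is gone for good, while a float pipeline carries it through:

```python
def to_8bit(v):
    """Quantize a 0..1 value to the nearest of 256 code values."""
    v = max(0.0, min(1.0, v))
    return round(v * 255) / 255

def grade(v, gain, bit_depth_8=False):
    """Apply a simple gain, optionally quantizing the result to 8 bits."""
    out = v * gain
    return to_8bit(out) if bit_depth_8 else out

a, b = 0.500, 0.501  # two subtly different pixel values

# float pipeline: darken heavily, then restore
fa = grade(grade(a, 0.01), 100.0)
fb = grade(grade(b, 0.01), 100.0)

# 8-bit pipeline: the intermediate quantization collapses both values
qa = grade(grade(a, 0.01, True), 100.0, True)
qb = grade(grade(b, 0.01, True), 100.0, True)
```

The same effect is why heavy grades on 8-bit footage band and posterize, while 32-bit float compositing preserves the "minutiae between color ranges" Hall describes.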
Broadcast
Mr. Warmth • Teleflora
Director: Tim Hamilton Agency: Fire Station Production company: Go Films CG company: Asylum

Just in time for Super Bowl—and Valentine's Day—Teleflora brought back its talking flowers in a story of comeuppance and comedy. A vain office worker, who snubs a timid colleague, is supremely self-satisfied to receive a box of flowers. But when she opens the box, she finds a mouthy, wilted tulip—voiced by none other than comedian Don Rickles—that berates the sender as well as the receiver. As the horrified woman slams shut the box, the mousy co-worker receives a beautiful bouquet of Teleflora flowers in a vase, presented personally by a Teleflora deliveryman. Asylum created its first talking flower for
Teleflora a year and a half ago to start off the campaign. For Super Bowl Sunday, there was pressure to come up with something that would up the ante. "And they did that with voice talent Don Rickles," says Asylum executive producer Mike Pardee. "We also needed to up the ante with the look, by updating the textures, colors, and feel of the petals, not just to match the bouquet better, but also to make it feel more real." The spot was shot first with a stand-in voice talent, with the VFX team in attendance, to ensure that enough space was left inside the flower box to insert the CG flora during post. "We worked with the director to make sure the framing was right, so when we have the digital flowers in the bouquet, they'd have enough room to breathe and act," says lead animator/CG supervisor Mike Warner. The digital tulip that was the hero flower rises upward farther than the other flowers. "We pushed the drama of filling the frame further this time, so the flower could be more [theatrical]," says Warner. "In working with the actors, we find an eye line that will work for the director and also makes sense for us. And that's not always an easy thing."

Not all the flowers are digital, however. A florist and the art department put together a practical bouquet using real dead flowers. "The bulk of the bouquet is practical," Warner says. Two sticks in the bouquet served as tracking markers, and the digital team used Andersson Technologies' SynthEyes to track the practical flowers, so the compositors could drop the digital flowers right on top of the bundle. The four digital tulips were modeled and rigged in Autodesk's Maya, with textures painted in Adobe's Photoshop and Right Hemisphere's Deep Paint. The color palette matched some of the existing flowers in the practical bouquet, although the crew had to fight the temptation to make the CG flowers more beautiful. "In this case, we want them to look dead and ugly," says 3D lead Jeff Werner. "There was an ugly red flower in the practical bouquet that we changed more to a brown, so our main hero tulip, which is red, could stand out."

The animation was a process of iteration. "The danger of going too cartoony is that you want to avoid a lot of Warner Bros.-style squash and stretch," says Werner. "Our part was to match that dialog just right so it felt like the flower was saying the dialog, and give it that little emotion to turn the flower into an actor. Between head motion and exaggerating the dialog with mouth motions, you get a lot more personality out of it." Asylum used The Foundry's Nuke for precompositing, with the final composite work done in Autodesk's Flame. Rendering, meanwhile, was done within Pixar's RenderMan. Lighting artist Eric Pender also did pre-comps to set up the lighting, generating passes and mattes so the client could adjust the colors and tones in the final composite. "That's key, to let the client have all that flexibility in the end," says Pardee. "Also, having a voice like Don Rickles was fun."

Asylum modeled and then inserted four computer-generated tulips into a bouquet of real (albeit wilted) flowers. The artists then tweaked the color and location of the real and fake flowers.
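The passes and mattes Pender generated work on a simple principle: a matte restricts a grade to part of the frame, so colors and tones can be dialed in the final composite without re-rendering. A minimal, hypothetical sketch of blending a graded pass back through a matte (the names and values are illustrative, not Asylum's):

```python
def comp_through_matte(orig, graded, matte):
    """Per pixel: matte * graded + (1 - matte) * original.
    All three sequences hold 0..1 values of equal length."""
    return [m * g + (1.0 - m) * o for o, g, m in zip(orig, graded, matte)]

plate = [0.2, 0.4, 0.8]             # original pixel values
warmer = [v * 1.5 for v in plate]   # a client-requested grade
matte = [1.0, 0.5, 0.0]             # where the grade should apply
result = comp_through_matte(plate, warmer, matte)
```

Delivering the grade, the matte, and the original as separate passes is what lets the client keep adjusting in Flame after the 3D work is locked.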
Fiddling Beaver • Monster.com
Director: Tom Kuntz Agency: BBDO Production company: MJZ CG company: Framestore

In November, BBDO creative director/writer Steve McElligott and creative director/art director Jerome Marucci approached producer Anthony Curti with the concept of a fiddling beaver for a Monster.com commercial. They made two decisions: The beaver would be animatronic, and they would return to Framestore—where they had gone for two previous Super Bowl commercials—for all the CG creation and compositing. "We wanted it to be realistic, to show a busy-beaver lifestyle and how this one beaver (an animatronic from AnimatedFX in Los Angeles) was an outcast," says Curti. The spot
called for an entire beaver world that had to be constructed quickly, so the group lost no time in contacting the VFX studio. Despite the quick contact, Framestore would have less than two and a half weeks to complete the work. At the time the project started, there wasn’t a storyboard yet, says Framestore animation supervisor Kevin Rooney. “All we knew was there was a hero beaver playing the fiddle and there would be background beavers,” he says. With time ticking away, the Framestore team immediately began creating a generic CG beaver model to get the process moving. When the modelers received some photos of the animatronic from the shoot, they began to shape their creatures—which would appear in the background shots—to be more in tune with the look of the mechanical star. “What we didn’t know was how far the beavers would be away from camera, and if they’d be swimming or on dry land,” says CG supervisor Jenny Bichsel. Next up was creating different fur grooms and various animation cycles for the CG animals. “We rigged our beaver models to do anything that might be asked of them,” says Rooney. “We didn’t spend much time setting up the faces because we knew there wouldn’t be much in terms of lip synching. [Rather], we
focused on swimming, building dams, preening, and other beaver stuff. That way, when we got the board, we'd have an animation library built up and could hit the ground running."

Once the storyboard came in, Rooney split up the shots. "We'd already done the more naturalistic beaver animation," he says. "Now we had to go back and tell a story through blocked performances and, in particular, a couple of close-up shots of digi-beavers. We had to be faithful to the animatronic fiddler. We had to make sure ours looked like beavers in terms of their performances, and yet get across character with body language and facial expressions."

Using Maya, the Framestore team modeled and animated 23 CG beavers, matching their look to the animatronic star of the commercial.

The Framestore crew crafted its CG creatures using Autodesk's Maya for the modeling chores, rigging, and animation, with some pre-comping that was done in Apple's Shake. Maya and Mental Images' Mental Ray were used for the rendering. In the end, Framestore created 23 digital beavers, with two different hairstyles (dry and wet), using its own grooming system and a newly developed fur shader. "That was more important for the close-up shots, where the beavers were quite large in frame," says Bichsel. "We also had to make sure they had distinctive color differences. We were able to control the root and tip color of the fur and how coarse and shiny it was, so we manipulated all these channels to get them to look different. With our system, you can paint while looking in the 3D viewport and
move the direction of hair, and it updates very quickly." As a result, the team spent less time on troubleshooting, contends Bichsel, and more time on grooming, painting, and general creativity.

Senior Flame artist Raul Ortego, meanwhile, spent most of his time doing rig removal. "It was quite simple but labor-intensive work," he says. When the rig was behind the animatronic, the work was much easier. But sometimes the rig was in front, requiring a track (within Autodesk's MatchMover) and the addition of fur. Some shots had camera moves that had to be replicated in Maya. For rotoscoping, the tool of choice was Silhouette, named after the manufacturer.

"This was a really fast turnaround for creating digital creatures," concludes Jenn Dewey, VFX producer. "The final animation was [done] a few days before we were supposed to finish lighting the whole spot, and that was two days before it shipped to Flame." Nevertheless, the spot worked well. ■

Debra Kaufman is a freelance writer for numerous entertainment industry publications. She also writes about content for mobile devices at www.MobilizedTV.com. She can be reached at
[email protected].
Trends & Technologies
Last month, two weeks before the main event, the Academy of Motion Picture Arts and Sciences awarded 15 scientific and technical awards to 46 men who pioneered advances in moviemaking technology. Among the awards this year was a Scientific and Engineering award given to Per Christensen, Michael Bunnell, and Christophe Hery for developing point-based rendering for indirect illumination and ambient occlusion. The first film to use this rendering technique, Pirates of the Caribbean: Dead Man's Chest, won an Oscar for the visual effects created at Industrial Light & Magic. Now available in Pixar's RenderMan and widely adopted by visual effects studios, the point-cloud rendering technique has helped studios create realistic CG characters, objects, and environments in more than 30 films. Simply put, the technique is a fast, point-based method for computing diffuse global illumination (color bleeding). This point-cloud solution is as much as 10 times faster than raytracing, uses less memory, has no noise, and its calculation time does not increase when surfaces are displacement-mapped, have complex shaders, or are lit by complex light sources. It owes its existence to a unique interplay between researchers in a hardware company, a software company, and two visual effects studios.
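In outline, the technique stores the scene's surfaces as a cloud of small oriented disks and, at each receiving point, sums how much of the overlying hemisphere those disks cover. The sketch below is a toy, brute-force version of that accumulation, using a disk-to-point form factor similar to the one popularized by Bunnell's GPU Gems 2 chapter; the function and variable names are illustrative, not Pixar's:

```python
import math

def point_occlusion(p, n, disks):
    """Approximate how much of the hemisphere above point p (unit normal n)
    is covered by a cloud of oriented surface disks.

    Each disk is (center, normal, area). This is the brute-force O(N) loop;
    production systems cluster disks into a hierarchy so distant groups can
    be treated as single emitters and mostly skipped."""
    occ = 0.0
    for c, dn, area in disks:
        d = [c[i] - p[i] for i in range(3)]
        r2 = sum(x * x for x in d)
        if r2 < 1e-12:
            continue  # skip the receiver's own disk
        r = math.sqrt(r2)
        w = [x / r for x in d]
        cos_r = max(0.0, sum(w[i] * n[i] for i in range(3)))    # above the horizon?
        cos_e = max(0.0, -sum(w[i] * dn[i] for i in range(3)))  # disk faces receiver?
        # disk-to-point form factor approximation
        occ += (area * cos_r * cos_e) / (math.pi * r2 + area)
    return min(occ, 1.0)
```

Because each disk already carries its surface color, the same accumulation extended with per-disk radiance gives the color bleeding the article describes, with no shading evaluation at gather time.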
Bunnell's Gem of an Idea

The idea originated with Michael Bunnell, now president of Fantasy Lab, a game company he founded. Nvidia had just introduced its programmable graphics processing unit (GPU), and Bunnell was working in the company's shader compiler group. "It was a new thing and an exciting time," he says. "We were translating human-written shaders into code that could run directly on the graphics processing chip." Real-time rendering made possible by the GPU opened the door to more realistic images for interactive games and more iterative shading and lighting in postproduction houses. Bunnell pushed his excitement out into the world by writing a chapter for the first GPU Gems book on shadow mapping and anti-aliasing techniques. "It wasn't a new technique," Bunnell says. "It was about doing something on a graphics chip in a reasonable number of steps." Bunnell was more interested, though, in subdivision surfacing, in tessellation that breaks a curved surface into the triangles needed for rendering, and he began working on ways to render curved surfaces in real time. The demo group at Nvidia used his code for a product launch, and then asked if he could do something more: real-time ambient occlusion.

At top, when Russell holds open Carl Fredricksen's door with his little foot in Up, he owes the soft, colored shadow beneath to an award-winning point-cloud-based rendering technique. Above, the diffuse, indirect illumination also helps make Carl's storybook house believable.

Ambient occlusion darkens areas on CG objects that we can see, but that light doesn't reach—the soft shadow under a windowsill, for example, or a character's nose. It is calculated knowing the geometry, not light, and in some ways is self-shadowing. The demo group had implemented a version of ambient occlusion using notes from Hayden Landis's SIGGRAPH 2002 course. (Landis, his colleague at ILM Hilmar Koch, and Ken McGaugh, now at Double Negative, received a Technical Achievement Award from the Academy this year for advancing the technique of ambient occlusion rendering.) "The only problem [the demo team] had was that it took about eight hours to compute the ambient for a 30-second demo," Bunnell says. "It looked good, but it was still an off-line process. Basically, they baked in the shadows."

So, with a publication date for a new GPU Gems in the offing, Bunnell decided to tackle the problem. And by then, Nvidia's GPUs were faster and more programmable, with branching and looping built into the chip. First, Bunnell created a point cloud from the vertices in the geometry. "I created a shadow emitter at each vertex of the geometry," he says. "And, I had each vertex represent one-third of the area of every triangle that shared the vertex. I approximated that area, kind of treating it like a disk. Then I used a solid angle calculation that tells you [how much of the hemisphere the disk covers] if you were looking at that disk. That tells you how much shadow the disk creates." He "splatted" the result, vertex by vertex, onto pixels on the screen, adding and subtracting depending on how dark the disks were. And then he realized he didn't need to do that.

"Instead of splatting, I could make the emitters at each vertex be receivers," Bunnell says. "I could go through the list of all these vertices and calculate one sum for that point, and accumulate the result at full floating precision. So, I made the points (where I did the calculations) do more than cast the shadow for ambient occlusion—they also received shadows from other data points."

And that led to a breakthrough. "Since I had thrown the geometry away, I could combine points that were near each other into new emitters," Bunnell says. "So, I would gather four points or so in an area and use them as an emitter. Then, I had a hierarchy where I combined these emitters into a parent emitter. So, if I'm far enough away from the points, I can use the sum, the total of all the children, and I don't have to look at all the children; I can skip a whole bunch of stuff. If not, I can go down one level, and so forth. I can traverse the tree instead of going to each emitter that's emitting a shadow value."

The second breakthrough was in realizing that if he ran multiple passes, he could get a closer approximation each time. "I could get an accurate result without looking at the geometry," Bunnell says. "Then I realized if I could use this for shadowing and occlusions, I could use it as a cheap light transport technique." That made indirect illumination—which needs to know about light—possible. And, he wrote about all this in GPU Gems 2.

The Next Step

Meanwhile, at ILM, Christophe Hery had developed a method of rendering subsurface scattering by using point clouds to speed the process. He used RenderMan, which had always diced/tessellated all geometry into micropolygons. "It does this tessellation very fast," Hery says. "So I wrote a DSO (dynamic shared object) that could export a point cloud corresponding to the tessellation RenderMan had created. My intention was to use it only for scattering, but I learned I could store anything."

Pixar's Up is one of the latest feature animations to use RenderMan's point-based approach for color bleeding, as evidenced in the image above, but Sony's Surf's Up was the first. More than 30 films have used the technique for VFX and animated features.

In 2004, Hery spoke at Eurographics in Sweden about how he used point clouds for scattering, and in the audience was Per Christensen, who had joined Pixar. "He came to me and said that he wanted to implement this in RenderMan," Hery recalls. And he did. Christensen and the RenderMan team made sure the rendering software could generate a point cloud and had the appropriate supporting infrastructures. Everything was in place for the next step.

In 2005, Rene Limberger at Sony Pictures Imageworks, where work on Surf's Up had begun, saw Christensen at SIGGRAPH. "He asked me if I would take a look at Bunnell's article and see if I could implement it in RenderMan," Christensen says. So Christensen created a prototype version targeted to CPUs in a renderfarm, rather than a GPU.

"I also extended it somewhat," Christensen says. "Mike [Bunnell] computed occlusion everywhere first, and then if something realized it was itself occluded, he would kind of subtract that out. I came up with a method that I believe is faster because it doesn't need iterations and it computes the color bleeding more accurately. It's a simple rasterization of the world as seen from each point. It's as if you have a fish-eye lens at each point looking at the world and gathering all that light. Developing the prototype was quick because the point-cloud infrastructure was already in place."

Christensen gave Limberger that prototype implementation to test. "And, right at the same time, I got an e-mail from Christophe Hery at ILM," he says. "He had the same request. I said, 'Funny you should ask. I just wrote a prototype. Give it a try and give me some feedback.' It would have been unethical for me to tell Christophe that Rene was testing it as well, so he didn't know the guys at Sony were doing similar work. But, Christophe picked it up quickly and put it into production right away."

Christensen considers the close collaboration with Limberger and Hery to have been very important to the process. "They are doing film production, so they knew what would be super useful," he says. "They did a lot of testing and feedback, and suggested many improvements that I implemented." Pixar first implemented the color-bleeding code in a beta version of RenderMan 13 in January 2006, and the public version in May.

"ILM had collaborated with Pixar for years," Hery says, "but this was more." The two exchanged ideas, feedback, and source-code snippets at a rapid pace, on nearly a daily basis.

Speed Thrills

Christensen, who considers himself a raytracing fanatic, ticks off the advantages this approach has over raytracing. "It's an approximation, but raytracing is an approximation, too," he says, "and both of them will eventually converge to the correct solution." "The effect is exactly the same," Christensen continues. "But using the point cloud is faster. Raytracing is a sampling. If you raytrace to get ambient occlusion, you shoot all these rays, count how many hit and
how many miss, and that gives you ambient. If you want color bleeding, you also have to compute the color at those hit points. That involves starting a shader to compute the color, so it’s time-consuming and expensive. With the pointbased approach, you get color bleeding for free. The object [from which you generate the point cloud] already has the color and materials applied, so the point cloud has the appropriate colors built in. You just look up the pre-computed color at that point and you’re done.” Similarly, while displacement mapping slows a raytracer down, the point cloud doesn’t care, which is one reason why Hery wanted to use this method for Pirates. “We saw a threetimes speedup and more for ambient, and
probably four or five times faster for indirect illumination [color bleeding]," he says. "It enabled a new look, and we could do Pirates 2 without taking over the whole renderfarm." Hery, who is now a look development supervisor at ImageMovers Digital, adds, "There's still no better practical solution for indirect illumination in production for RenderMan-based engines. It's the best approach for optimizing."

At SIGGRAPH 2009, Christensen finally met Bunnell, the man whose idea led to the sci-tech awards for the three researchers. "We had exchanged e-mail," Christensen says, "but hadn't talked in person. We had dinner in New Orleans. I was excited to have finally met him in person. The point-based approach is like all great ideas: In hindsight, it seems obvious, but somebody has to think of it. It's absolutely brilliant." ■

(Top) Davy Jones was the first CG character ILM rendered using point-cloud-based indirect illumination. (Bottom) Double Negative recently used the technique in 2012.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at [email protected].

2009 Sci-Tech Oscars

The Scientific and Technical Awards, often called Sci-Tech Oscars, are given at three levels: Technical Achievement Award (certificate), Scientific and Engineering Award (bronze tablet), and the Academy Award of Merit (Oscar statuette). Of the 15 awards this year, 12 center on tools for rendering, on-set motion capture, and digital intermediates.

Rendering:

Scientific and Engineering Award
Per Christensen, Michael Bunnell, and Christophe Hery for the development of point-based rendering for indirect illumination and ambient occlusion. Much faster than previous raytraced methods, this computer graphics technique has enabled color-bleeding effects and realistic shadows for complex scenes in motion pictures.

Scientific and Engineering Award
Paul Debevec, Tim Hawkins, John Monos, and Dr. Mark Sagar for the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures.

Technical Achievement Award
Hayden Landis, Ken McGaugh, and Hilmar Koch for advancing the technique of ambient occlusion rendering. Ambient occlusion has enabled a new level of realism in synthesized imagery and has become a standard tool for computer graphics lighting in motion pictures.

On-set Performance Capture:

Technical Achievement Award
Steve Sullivan, Kevin Wooley, Brett Allen, and Colin Davidson for the development of the Imocap on-set performance capture system developed at Industrial Light & Magic.

Digital Intermediate:

Scientific and Engineering Award
Dr. Richard Kirk for the overall design and development of the Truelight real-time 3D look-up table hardware device and color management software.

Scientific and Engineering Award
Volker Massmann, Markus Hasenzahl, Dr. Klaus Anderle, and Andreas Loew for the development of the Spirit 4K/2K film scanning system as used in the digital intermediate process for motion pictures.

Scientific and Engineering Award
Michael Cieslinski, Dr. Reimar Lenz, and Bernd Brauner for the development of the ARRIscan film scanner, enabling high-resolution, high-dynamic-range, pin-registered film scanning for use in the digital intermediate process.

Scientific and Engineering Award
Wolfgang Lempp, Theo Brown, Tony Sedivy, and Dr. John Quartel for the development of the Northlight film scanner, which enables high-resolution, pin-registered scanning in the motion-picture digital intermediate process.

Scientific and Engineering Award
Steve Chapman, Martin Tlaskal, Darrin Smart, and Dr. James Logie for their contributions to the development of the Baselight color-correction system, which enables real-time digital manipulation of motion-picture imagery during the digital intermediate process.

Scientific and Engineering Award
Mark Jaszberenyi, Gyula Priskin, and Tamas Perlaki for their contributions to the development of the Lustre color-correction system, which enables real-time digital manipulation of motion-picture imagery during the digital intermediate process.

Technical Achievement Award
Mark Wolforth and Tony Sedivy for their contributions to the development of the Truelight real-time 3D look-up table hardware system.

Technical Achievement Award
Dr. Klaus Anderle, Christian Baeker, and Frank Billasch for their contributions to the LUTher 3D look-up table hardware device and color-management software.
Stereoscopy
Some preconceptions are difficult to shake. When most people think of cognac, they visualize older gents relaxing in well-worn, overstuffed leather chairs while sipping the liquor from a heavy lead-crystal glass in one hand and an expensive cigar in the other. Perhaps that was an accurate depiction from days of old. But today, the picture is far different.
And that is just what Maxxium wanted to show. When Maxxium, which owns Courvoisier cognac, wanted to update its brand and reconnect the spirit of Courvoisier with its cocktail heritage, it hired integrated marketing agency White Label to do just that. With the help of production company Atticus Finch, they set out to shatter that image—literally and metaphorically—in an advertising spot that was shown in stereoscopic 3D.
After discovering that a broadcast network in the UK was devoting a week of 3D TV programming to stereo and was looking to air advertising spots in stereoscopic 3D, White Label suggested that Maxxium present the cognac in another dimension. Maxxium was sold. The idea pitched to Maxxium was a contemporary spot that would broaden Courvoisier’s appeal outside the stereotypical cognac consumer as a spirit served with a mixer. The focus would be an exploding brandy balloon that reforms into a cocktail—a simple, straightforward visual presentation. Accomplishing it, however, was less than simple. And the only way to maintain the necessary control over the explosion and fluid would be through the use of photoreal computer graphics. “The project was new and exciting territory for us all: our first TV spot, the first UK TV commercial for Courvoisier, and broadcast on the first-ever night to screen 3D advertising on terrestrial TV,” says White Label’s creative director Greg Saunders. The ad agency approached Atticus Finch, known for its animation work on videos for the Northern Irish alternative rock band Snow Patrol. Excited about the project, the production company signed on.
Breaking the Model

When Atticus Finch decided that the best route for the commercial would be entirely CG, the production company looked at doing all the work in-house, says Chris Richmond, the facility's creative director. But soon after, Atticus Finch brought on Spectre VFX for the CG creation, having partnered with that studio on prior projects. First, Richmond created a static previsualization to communicate Atticus Finch's vision for the spot, which he subsequently handed over to Spectre. A basic animation test within Autodesk's Softimage provided a taste of the type of movement he wanted for the camera and helped establish the timing of the piece.

[Image: Atticus Finch, along with Spectre VFX, modeled a cocktail glass in Maya, and then shattered it using both Maya and Blast Code. The liquid was simulated using RealFlow.]

Working within Autodesk's Maya, the group at Spectre modeled all the props, including the cocktail glass and bottle, and textured them, taking care with the caustics, which proved tricky because of the transparent glass. Then the objects were shattered into tiny pieces. The artists created the simulation of the explosion using a combination of Maya and Blast Code's demolition software, a plug-in to Maya. "It took a number of iterations to get the glass simulation correct," Richmond points out. The liquid, meanwhile, was simulated using Next Limit Technologies' RealFlow. Once again, several simulation attempts were needed before the fluid performed the way the director and client wanted for the desired look. To render the images, the group used Mental Images' Mental Ray within Maya. The spot called for a great number of rendering passes (highlight, specular, ID, motion vector, and so forth), which then doubled for stereo.
Providing Depth

The models were composited against a black background with The Foundry's Nuke. According to Richmond, Nuke proved an ideal compositing solution, not only because of its well-developed stereo workflow, but also because of the large amount of rendering involved. "We were doing motion blur but only had a prime-time budget, not a feature-film budget, so we couldn't go to a renderfarm," Richmond says. "We did tests for a motion-vector pass, and ended up doing motion blur in Nuke rather than at the point of rendering in Maya to save time, and it worked well."

The artists also added lighting flares to give the CG atmosphere against the black background used in the spot. Because the action occurred in black space with no foreground or background objects, it was difficult to mark the Z-axis depth for the stereo. For the stereoscopy itself, Spectre used
Maya’s built-in stereo camera rigs, with each placed 64mm apart, the average separation of the human eye. “Getting the lenses and cameras set up to get the best stereo effect took a bit of experimentation. There are limits to where the effects work or don’t work,” says Richmond.
Cheers!

The most challenging aspect of the project, maintains Richmond, was getting the glass fragments to look right with the lighting, and getting the fluid to move in a way that satisfied everyone involved. Another consideration was the fact that the commercial had complex camera moves but retained a "one-shot" look, so that the eye did not have to readjust to new scenes.

To review the stereo effects, the group used anaglyph glasses. For the broadcast, the group delivered the rendered material to a company for ColorCode encoding, to make it stereo-ready. The commercial also was rendered out in HD and shown in theaters as a preview before Avatar in the UK. In both instances, the audience members, wearing 3D glasses, see a tranquil glass of cognac explode, with glass splinters and liquid flying toward them, before spinning into a vortex that forms a cocktail sitting next to a bottle of Courvoisier.

"We challenged White Label to deliver an innovative campaign to drive reconsideration of Courvoisier as a mixable spirit, and felt the 3D TV commercial opportunity was a great fusion of message and media," says Eileen Livingston, Maxxium's marketing controller. "Clearly showcasing Courvoisier as a cognac with another dimension is also a perfect fit with the revolutionary spirit embodied by the brand for the past two centuries." ■

Karen Moltenbrey is the chief editor of Computer Graphics World.
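An anaglyph review pass like the one the team used can be approximated with a few lines of image math: keep the red channel from the left eye and the green/blue channels from the right. This is a generic red-cyan sketch for quick checks, not the ColorCode encoding used for the broadcast (ColorCode is a proprietary amber/blue scheme):

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Red-cyan anaglyph from two HxWx3 uint8 RGB views."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]     # left eye -> red channel
    out[..., 1:] = right[..., 1:]  # right eye -> green and blue channels
    return out

# Tiny synthetic "views" just to show the channel routing.
left = np.full((4, 4, 3), 50, dtype=np.uint8)
right = np.full((4, 4, 3), 200, dtype=np.uint8)
ana = make_anaglyph(left, right)
print(ana[0, 0].tolist())  # [50, 200, 200]
```

Viewed through red-cyan glasses, each eye then sees (approximately) only its own view, which is why this is cheap enough for day-to-day stereo review even without a 3D monitor.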
■ ■ ■ ■
Business
Middleware targeted at the electronic entertainment industry has evolved in recent times from a set of products focused on helping game developers automate graphics and animation into a full suite of products for automating tasks ranging from graphics to character intelligence. Meanwhile, the companies making middleware for interactive entertainment have broadened their reach to include many more platforms outside of game consoles and PCs, including mobile phones, PDAs, and set-top boxes for interactive television (ITV), and some are now reaching into new markets, such as Blu-ray Disc players and automation in the film industry.

According to findings in Acacia Research Group's "Middleware for Interactive Entertainment 2009," spending on game middleware will continue to be slow in the first half of 2010, with publishers and developers remaining hesitant to spend. However, spending will pick up when the video game pipeline revives in the second half, when investment in new projects returns. As nervous as they are, developers and publishers will need to have new games in the pipeline; they cannot delay development indefinitely.

Spending in the ITV segment also will remain slow for the first half of the year, based on the saturation of the digital set-top box upgrade market in mature markets and lower price points in emerging digital television markets. Mobile middleware spending will increase as demand for smart phones with much more sophisticated hardware makes gaming and other entertainment applications more accessible on mobile devices. Overall spending on middleware for electronic entertainment will increase at a compound annual growth rate of 9.3 percent, from approximately $1.3 billion at the end of 2009 to just over $2 billion at the end of 2014.
Console/PC

It is no secret that the electronic entertainment industry has suffered along with the broader economy over the past 18 months. The slowdown in the video game business has had a ripple effect, making it much harder for industry suppliers to generate revenue growth. Unfortunately, it will take some time for videogame developers and publishers to increase spending on middleware products.

The outlook for spending on game middleware is sketchy for the first half of the year. The lull is primarily driven by cancellations and delays experienced in mid- to late 2009 in developing new games for consoles and PCs. It does not help matters for middleware providers that the industry is mid-cycle between console launches; this, coupled with reluctance to invest heavily in game development, has left the pipeline relatively dry. While both Sony and Microsoft are releasing new controller technology in 2010, which will require new games that take advantage of the Motion Controller and Natal, respectively, the lead-up is still nothing like a console launch. These delays mean that growth in spending on middleware will remain slow through 2010. If new consoles are delayed longer than their typical seven-year launch cycle, that will push a return to fast growth even further out, as game developers wait to invest in new technology until they have access to development programs for new consoles.

[Image: Trinigy's Vision game engine can be used to create entertainment for a range of platforms.]

Casual gaming is having a huge impact on console and PC gaming. Earlier in the decade, when casual gaming was taking off, the theory of most game developers and publishers was that casual gaming would be additive: core revenue from the hard-core immersive gaming sector would remain the bread and butter of the industry, and casual gaming would add to that revenue. In an economy where consumers walk away from a $60 console or PC game and eagerly buy up a $10 casual game, developers and publishers are looking at casual gaming to generate revenue that previously came from hard-core games. Casual game revenue was not supposed to replace hard-core game revenue, but for many publishers it has. This has left middleware out in the cold in many cases. Casual games just do not need all the bells and whistles of a hard-core game, the sector where most middleware providers have been focused for a long time.

However, the news is not all doom and gloom. There are several things on the horizon that will push game middleware back into the kind of growth territory expected of nascent technology industries.
At some point, game console makers will launch new platforms. The new platforms may not be developed on the typical launch schedule, but the advances in PC technology will leave game consoles behind, and gamers will begin to demand faster, better machines. The economy will get better, as well. For the game middleware makers that have invested wisely and are spending the hard times developing new technology, there will be pent-up demand that results in new seats and subscriptions when gamers start buying titles en masse again. Spending on game middleware for console and PC titles will increase at a compound annual growth rate of 12.4 percent, from $413 million at the end of 2009 to $740 million at the end of 2014.

[Chart: Spending on Middleware for Video Game Consoles & PCs, 2009–2014, in millions. Source: Acacia Research Group]

Mobile

The mobile industry was one bright spot in the overall economic picture throughout 2009. In particular, the smart phone segment outpaced most expectations, driven by the iPhone. According to research company Gartner, third-quarter sales of smart phones were up 12 percent in a global economy that was still struggling. Middleware makers have been keeping tabs on the mobile game segment for more than a decade, waiting for graphics acceleration, memory, and power management to advance to the point where gaming made sense on a handset. Now, most mobile handsets are more than capable of handling basic 2D games, and a significant portion of handsets are ready to take on 3D gaming, such as that being enabled by Unity's iPhone middleware.

[Image: Unity Technologies' game engine is just one of many integrated authoring tools for creating 3D video games and other interactive content, such as architectural visualizations or real-time 3D animations.]

Mobile entertainment middleware as a business is relatively young. Many companies and technologies have been around for less than a decade. This means that the industry is still in the early stages of developing successful business models. Experimentation with models ranges from hosting and portal services, where technology providers take a piece of each transaction, to stand-alone, off-the-shelf game engines at a per-game price. Middleware makers looking at the mobile middleware market from the game or ITV industry have to be prepared for a completely different market. Development budgets in the mobile game industry are a fraction of what they are in the PC or game console industry. Selling a solution that costs hundreds of thousands of dollars will not fly in this segment, so middleware providers must be innovative in developing a large community of users and selling product at a very low price point relative to what can be charged in the console and PC game development industry.

Middleware makers in this segment must also compete with a very helpful mobile technology industry. Handset manufacturers and chip makers, such as Nokia and Qualcomm, among others, provide development technology for free. The point is to encourage the development of applications and content for Nokia and Qualcomm technology. Middleware makers have to provide solutions that developers cannot get through Nokia's developer network, Qualcomm's BREW, or Sun's Java suite of products. The key to competing successfully with these mobile technology giants is that these providers are not in the entertainment business, and oftentimes the solutions they provide are not necessarily optimized for entertainment. Voice technology is the bread and butter of the mobile industry. These free solutions do not always take into consideration issues that are important to gamers, such as controllers, latency, and plain fun—areas where game technology providers can differentiate themselves.

As game developers and other content providers look for cheaper ways to develop games, mobile content will continue to look attractive. When confronted with spending millions of dollars on the next console game, compared to hundreds of thousands of dollars on the next mobile game, many will opt for mobile. The iPhone represented the tipping point for mobile entertainment, but it is just the tip of the iceberg: with Android finally providing true competition, mobile entertainment will explode. This will push spending on middleware for mobile entertainment to a compound annual growth rate of 10.6 percent, from approximately $178 million at the end of 2009 to $294 million at the end of 2014.

[Chart: Spending on Middleware for Mobile Devices, 2009–2014, in millions. Source: Acacia Research Group]

ITV

Digital set-top boxes for interactive television have finally become powerful enough that true interactive applications are now being developed for many pay television services. Entertainment applications, from karaoke to casual
gaming, are proliferating in markets such as South Korea and the UK. This segment has been tough for middleware makers. By the early 2000s, it had become clear that middleware was not going to generate enough immediate revenue to be a stand-alone business. Pay television operators were not interested in anteing up additional costs for software in set-top boxes, and it would be a few more years before these operators would see the value in middleware. Now, nearly a decade later, the middleware market for cable and satellite in mature pay television regions is dominated by two companies: NDS and OpenTV. Neither company is a pure-play middleware provider; each has diversified its product line to include back-end or application development to ensure cash flow. Competition in ITV middleware is fierce in emerging cable, IPTV, and satellite markets, where companies like Alticast have seen increasing success and familiar names, such as Microsoft, are finding new customers. IPTV operators are using advanced applications as a competitive edge over traditional cable and satellite networks.

[Chart: Spending on Middleware for Interactive Television, 2009–2014, in millions. Source: Acacia Research Group]

The primary driver behind growth in pay television subscriptions over the past decade has been the move to digital networks. This has allowed set-top box manufacturers and middleware makers to survive harsh economic times that have destroyed many businesses. More mature markets, such as the US, are well into the transition, with emerging markets in India and China now presenting opportunities for new growth, although at much lower price points.

For middleware companies that have traditionally focused on game consoles or PCs as their market, the ITV segment does not bring to mind obvious opportunities, but set-top boxes are now becoming powerful enough to handle more sophisticated graphics, and consumers in markets outside the US are being treated to much more sophisticated interfaces and applications. While traditional game middleware manufacturers are not going to suddenly try their hand at ITV middleware, the market does present opportunities for partnerships and technology licensing. Television is changing dramatically, and game middleware providers have been enabling more immersive interactive entertainment for a long time. As TVs become more interactive, the opportunities will become more obvious: interface, font, voice, and even graphics technologies could make the crossover. These are just a few of the types of technologies long used in game middleware that could find a home in ITV through partnerships and alliances with set-top box, television, and ITV middleware makers. Spending on middleware for interactive television will grow at a compound annual growth rate of 6.9 percent, from approximately $701 million at the end of 2009 to $980 million at the end of 2014.

[Image: A big growth area for gaming middleware is the mobile market, as a number of handsets tackle 2D and 3D gaming, including LG Dacom with its karaoke application.]

Overall, the market for entertainment middleware will suffer from economic problems well into 2010, but that will create pent-up demand that will return toward the end of the year and in early 2011. Those companies that have invested wisely and are using this time to develop new, compelling technologies for interactive entertainment developers will be best poised to attract new and returning customers later in 2010 and early 2011. ■
[Pie chart: Percent of Mobile Phone Installed Base, 2D vs. 3D (segments of 65% and 35%). Source: Acacia Research Group]
Christine Arrington is a principal and senior analyst at Acacia Research Group. She can be reached at christinea@acaciarg.com.
■ ■ ■ ■
Web Gaming
ABOVE PAR

The online World Golf Tour game lets players simply enjoy a round of golf or play competitively in tournaments for prizes.
The origin of the game of golf is unclear. Some say that its roots date back to the Romans, others to ninth-century China. And then there are those who find parallels to games played in England, France, Persia, Germany, and the Netherlands. For the most part, though, the majority of folks accept Scotland as the birthplace of this sport, with 12th-century shepherds hitting stones into rabbit holes.

No matter the beginning, golf today is embraced by fans in nearly every corner of the world. For some, it is a competitive sport, dominated by players such as Tiger Woods, Arnold Palmer, Jack Nicklaus, and Phil Mickelson. For others, it is part of conducting business. For the majority, however, it is a leisure activity. Yet not everyone has the time or money to play a round of golf weekly, let alone daily. Memberships are expensive, tee times are difficult to obtain, and playing 18 holes can be time-consuming as well as exhausting. Thanks to CG technologies, however, golf enthusiasts can eliminate those obstacles and get their fix—albeit in digital form—whenever they like.

Computer golf games first popped up on the scene about 25 years ago. Since then, they have evolved from the pixelated look of Accolade's Mean 18, through the Microsoft Links series, to the realistic Tiger Woods PGA Tour from EA Sports. Some developers have even taken the sport online. The inherent challenges of creating an Internet title, however, have kept the look of Web-based golf games well below par. World Golf Tour has found a way out of the
Web sand traps that have handicapped other Internet golf titles, and, using a host of tools within the Adobe Creative Suite, has built a game that features photorealistic courses modeled after some of the world's most prestigious golfing locales.

Founded three and a half years ago, World Golf Tour set out to create an online sports destination, starting first with golf. "In sports, there are always those who play professionally and the huge fan base that wants to get involved by somehow participating and following the action," says YuChiang Cheng, CEO of World Golf Tour. "Fantasy sports is a very large market, and sports games on consoles have been successful, as well. We view the next step as the creation of large, online communities where people can play sports together."

The company spent the first two years in R&D, trying to figure out how best to achieve its two main design goals. First, the game had to be visually compelling. "There is a visual standard that people are drawn to; games on consoles like the Xbox 360 and PS3 are really beautiful," says Cheng. "We knew that if we were fighting for a person's time, we needed to be as compelling visually as games on those platforms." To tackle that issue, World Golf Tour's chief scientists and engineers believed they had found a picture-perfect solution by using high-resolution photographs. Says Cheng, "We made a mental leap. If you want the imagery to look beautiful and photoreal, why not start with photographs and work your way backward?" And that is what they did. In fact, the team merged a few techniques for its big-picture solution, including the growing concept of generating a real-world setting in 3D by stitching together large amounts of satellite imagery, aerial photos, and more—a la Google Earth and Microsoft Photosynth.
Second, the title had to be easily accessible for a mass audience; thus, the game had to be online and playable without requiring special plug-ins to be downloaded or a large software install—traditional obstacles for online game companies.
On the Green

When re-creating the greens for a specific course, the World Golf Tour team begins by gathering data that literally gives them the lay of the land. Using aircraft and helicopters, and armed with various surveying equipment, the group laser-scans the entire course. The resulting 3D point cloud is then transformed into an Autodesk Maya 3D model, on top of which the photo textures are placed. Typically, the crew acquires more than 100,000 high-res photos of the entire course from all angles, while a proprietary system records the exact location of each shot and matches it to the virtual camera view during the world construction.

However, dealing with so many pictures can be unwieldy. So, for importing, processing, and managing the plethora of pictures, the crew turns to Adobe's Photoshop Lightroom. Not every photo, however, is picture-perfect; after all, the pictures are acquired from real-world sources. Thus, the group uses Photoshop CS4 Extended to clean up and color-correct the imagery, such as smoothing out divots in the grass and removing unsightly power lines. Lightroom is used for color-balancing the imagery, after which complex scripts are used to automatically tweak the photos, giving them a consistent look.

[Image: World Golf Tour's digital courses are realistic representations of actual locales. The crew laser-scans the sites, creates a 3D model from the point cloud, and adds photographic textures atop the geometry.]

To determine how the ball reacts when "hit," the group runs various tests and simulations, gathering real-world data from impact and collision models, and then feeds that information into the company's proprietary physics engine. The engine calculates the corresponding ball velocity and angle of flight with each player's stroke, determining how the ball rolls and collides with the various surfaces. "Our greens are extremely accurate. Our contours are within 1.5 inches of accuracy, so when you roll the ball across the green, it acts and looks like it does in real life at that course, which is really important for a golfer," says Cheng.

In addition to the physics engine, the title employs Adobe ActionScript 3 programming to provide unique gameplay responses based on the player's input. While the courses are the main attraction, the game also features player avatars, which are modeled, lit, and textured in Maya, then exported as sprites and modified within Adobe's CS4. According to Cheng, enhancements to the JavaScript API in Flash CS4 Professional helped simplify the process of piecing together avatar parts, while improvements to the blend meshes made lighting the avatars easier.

World Golf Tour currently features a number of courses, including The Old Course at St. Andrews Links, Kiawah Island Resort, Pinehurst #8, Wolf Creek, Edgewood Tahoe, Bethpage Black, and Bali Hai. Realistic effects, such as adding sunshine to the Ocean Course at Kiawah, are added through the use of the 3D transformations and the bones tool in Flash Professional.
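World Golf Tour's physics engine is proprietary, and the article only describes its inputs (launch velocity and angle) and outputs (flight, roll, and collisions). As a rough illustration of the kind of calculation involved, here is a minimal drag-free launch model; the function name and numbers are ours, and a real engine would add drag, spin-induced lift, and per-surface collision response:

```python
import math

def ball_flight(speed: float, angle_deg: float, g: float = 9.81):
    """Carry distance and apex height (meters) for a drag-free launch
    from flat ground at `speed` m/s and `angle_deg` degrees."""
    a = math.radians(angle_deg)
    vx, vy = speed * math.cos(a), speed * math.sin(a)
    hang_time = 2.0 * vy / g   # time until the ball returns to y = 0
    carry = vx * hang_time     # horizontal distance covered in flight
    apex = vy * vy / (2.0 * g) # peak height of the arc
    return carry, apex

# Rough driver-like launch: ~70 m/s ball speed at a 12-degree launch angle.
carry, apex = ball_flight(speed=70.0, angle_deg=12.0)
print(round(carry, 1), round(apex, 1))
```

The proprietary engine's job is essentially to replace these closed-form expressions with stepped simulation against measured impact and surface data, which is why the contour accuracy Cheng cites matters so much for the roll.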
In a Flash

To tackle the second challenge, accessibility, World Golf Tour chose to build the game on the Adobe Flash platform. "Flash turned out to be an excellent game engine and display client," explains Cheng. "Its flexibility allowed us to merge our course photos with the 3D geometry, and that is really what made our company happen." And integration is done, well, in a Flash: the World Golf Tour team uses Creative Suite 4 Web Premium integration capabilities to directly import files from Photoshop Extended into Flash Professional.

Prior to World Golf Tour, the closest online golf game in terms of experience and aesthetic
required an 800MB download. "You'd start the process and hope that when you woke up the next day [the file] transferred without problems," Cheng says.

[Image: The multiplayer online game was built using a range of Adobe products, including Flash.]

So, why choose Flash to reach the massive multiplayer online audience? The answer, says Cheng, was simple: the numbers added up. The distribution of Flash is extremely wide; at last count, Adobe estimated the installed base at more than 98 percent of Internet-connected computers worldwide. Because Flash is already installed in most browsers, there is no need for players to download plug-ins, which is often a turn-off for potential audiences. "Adobe Creative Suite Web Premium software offers us a comprehensive and integrated set of design and development tools," says Cheng. "We couldn't have achieved the same quality, realism, and reach without it." Moreover, the ability to develop quickly on the Flash platform, with its simple scripting language, enables the group to handle some very complex situations. Flash, maintains Cheng, is what facilitated the creation of the World Golf Tour games—as is the case with nearly all social games. "Social games could not have happened if it weren't for Flash," Cheng adds.

A total of six million rounds of World Golf Tour were played in the first six months after the game's launch, with the average player spending 38 minutes and completing 10 to 12 rounds of virtual golf on the site each week. It is now a popular gathering place for golf enthusiasts who want to simply play a round, or for those who want to up the ante by competing in tournaments for prizes. It has also become a destination for actual golf vendors—including the USGA, PING, SkyCaddie, and TaylorMade—to advertise their real-world wares, and for players to test out new equipment virtually.
No one expects virtual golf to take the place of the real thing, but an experience like World Golf Tour shows that the grass can be almost as green on a computer screen as it is on the local fairway. ■

Karen Moltenbrey is chief editor of Computer Graphics World.
March 2010
43
Augmented Reality
With the skill of a skater executing a triple Lutz, three companies—Yahoo, Total Immersion, and Helios—created a technically innovative augmented reality (AR) application for fans and athletes at the 2010 Olympics. “We probably developed the project in 30 days,” says Greg Davis, North American general manager for Total Immersion. The goal? “We wanted an innovative, fun way to make an audience aware of Yahoo features in a mobile phone,” says Barbara O’Connor, vice president of Yahoo’s global consumer marketing.

Helios installed the interactive AR experience outside Yahoo’s “Fancouver” venue in downtown Vancouver, British Columbia. Total Immersion created the experience—actually, two experiences—using its D’Fusion software. Both experiences relied on 46-inch, thin-bezel Samsung monitors installed in “windows” on the outside walls of the Fancouver building. “You can see that they’re fabricated, wrapped in vinyl,” says Davis. Inside, powering the experiences, are dual-core Pentium-based “black boxes” equipped with AMD ATI 4870 graphics cards.

For the first experience, News, Weather, and Sports, a prosumer Canon HD video camera points down from above the display window. When people walk up to the display, they see three options: If they stand on the left, a fedora pops onto their head and a larger-than-life mobile phone shows a news feed with the latest sport scores. Stand in the middle and they see a weather report while decked out in appropriate gear—an umbrella hat, sunglasses, or wool hat. Stand on the right, and they’ll find themselves wearing a baseball cap with a country flag and
looking at a live feed with the most current medal count.

To do this, Total Immersion’s D’Fusion software runs image-recognition algorithms on the incoming video stream to search for a face. “When it finds the face, it knows it must place a digital object relative to the target,” Davis says. And, it knows whether to place the correct objects in the left, middle, or right screen. “It’s tracking the X axis,” Davis says. “So as the person moves, we can put virtual environments around them that are reactive. For quite a while we’ve been stuck with little interpretations of face tracking and seeing the same execution over and over again—glasses, beards, hats, things that stay close to the face. We’re using the face to track and put things on it, but we’re also putting things around the face. As someone moves left or right, we can have things move around them.”

[Photo caption: This visitor at Yahoo’s Fancouver venue during the 2010 Olympics knows the weather report calls for snow because a woolen cap popped onto his head in augmented reality.]

The second experience was a snowboarding AR video game. Here’s how it worked: Yahoo had people stationed near the venue handing out cards. On one side were instructions about how to connect to Yahoo mobile; the other side was an interaction device. When people walk up to the screen, it’s as if they’re looking into a mirror, but, as in the first experience, it’s actually a video projection from the HD camera mounted beneath the screen.
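D’Fusion itself is proprietary, but the zone logic Davis describes (tracking a face’s X position and choosing which props and feed to show) can be sketched in a few lines. The zone boundaries and overlay names below are illustrative assumptions, not the actual implementation:

```python
# Sketch of the left/middle/right zone selection described above.
# Overlay names and equal-thirds boundaries are illustrative guesses;
# the real D'Fusion pipeline is proprietary.

OVERLAYS = {
    "left": ("fedora", "news feed with sport scores"),
    "middle": ("weather gear", "weather report"),
    "right": ("flag baseball cap", "live medal count"),
}

def pick_overlay(face_x: float, frame_width: float):
    """Map the tracked face's X position to one of the three experiences."""
    third = frame_width / 3.0
    if face_x < third:
        zone = "left"
    elif face_x < 2 * third:
        zone = "middle"
    else:
        zone = "right"
    return OVERLAYS[zone]
```

A real system would re-run this every video frame from the tracker’s output, which is what makes the props appear to follow viewers as they move along the window.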
“When someone holds the card with the snowboard up to the window, a mobile phone spins down from the top of the screen, with a snowboard character stuck to it,” Davis explains. “The character lands on the card. Once the system is initialized, that is, once it has successfully tracked the card, that window shrinks down to the center bottom of the screen. All around is an immersive video game.”

The viewer is now in the video game, controlling the movement of a snowboarding character with the card. Viewers can see themselves steering the character in the little window at the bottom. If the system loses the tracking, that window expands until the software tracks the card again, and then it shrinks back down.

“We’ve created experiences before where you see a tracking object and a 3D character attached to it doing some kind of animation,” Davis says. “No big deal. What is different with this experience is that there’s a game engine behind it. The 3D character is still attached to a tracking object, but we’re incorporating the character into a video game.”

Two teams of 3D artists, engineers, and project managers at Total Immersion developed and delivered the experiences, with a separate team working on incorporating the real-time feeds. The 3D artists used Autodesk’s Maya. All the other software is proprietary.

Helios handled the logistical support, hardware, installation, and monitoring of the application. “We’re constantly making sure the system is running perfectly,” says Jon Fox, chief creative officer at Helios, “including the real-time uploads.” Helios had a crew on the ground for the installation and for monitoring at first, but soon switched to remote monitoring using an Internet connection.

Yahoo considers the experience a great success. “We were thrilled with it,” says O’Connor. “People got to experience a product demo without [Yahoo] doing a product demo.
We even saw people taking pictures of their friends wearing the funny hats and interacting with the experiences. It was terrific.” ■
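The shrink-and-expand behavior of the game’s preview window amounts to a small per-frame state update: stay small while the card is tracked, grow when tracking is lost. A minimal sketch, with invented sizes and step value (the article does not specify them):

```python
# Sketch of the tracking-window behavior described in the article: the
# preview window shrinks while the card is tracked and expands when
# tracking is lost, until the card is reacquired. Sizes are illustrative.

SMALL, LARGE, STEP = 0.2, 1.0, 0.1   # window size as a fraction of the screen

def update_window(size: float, card_tracked: bool) -> float:
    """Advance the preview-window size by one frame."""
    if card_tracked:
        return max(SMALL, size - STEP)   # shrink toward the bottom center
    return min(LARGE, size + STEP)       # grow until the card is found again

size = LARGE
for _ in range(10):                      # card acquired: window shrinks
    size = update_window(size, True)
assert abs(size - SMALL) < 1e-9
```

Animating the size over several frames, rather than snapping it, matches the described effect of the window visibly expanding and shrinking back down.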
Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at
[email protected].
By George Maestri
Sculpting
ZBrush
Pixologic’s ZBrush has always been a little ahead of its time. The software pioneered the area of interactive 3D sculpting many years ago, and over the subsequent years, it has been used to create highly detailed and realistic models in games, film, and illustration. In addition to the offering’s 3D sculpting power, ZBrush also can create 2D illustrations as well as “2.5D” images, which are made by painting with depth. The new version of ZBrush adds a number of tools that make modeling and painting much easier. ZBrush works on both Windows and Apple’s OS X.

Upon launching the software, you’re given an option to load one of several stock models, import your own model, or proceed to a blank canvas. Once you are past this menu, the interface appears. The interface is attractive and has a large workspace in the center, with tools arrayed on either side. Pixologic tends to march to its own drummer when it comes to interface design. The menu structure is different from most applications, and the company tends to use its own terminology. This is not a huge hindrance, but it does take a little while to get used to ZBrush’s unique workflow.

[Image caption: ZBrush uses a brush interface that allows for highly detailed sculpting on models.]

The basic workflow is brush-based: You brush onto either a blank canvas or an existing model to add or subtract detail. You can paint in 2D, much like in any paint package, but the real power comes when you paint with geometry. Brushes can contain any geometric shape and be used to stamp or pull the surface of a model to add detail. Brushes, however, can go a lot deeper than simple pushing and pulling; they can actually invoke macros and other high-level functions to create very specific effects. A cloth brush could easily transform the surface of an object to canvas, for example. Another brush paints stitches for seams in clothing or for stitching up scars on the skin of a character. The possibilities with Pixologic’s technology are very broad and allow for a high degree of creative freedom.

The most common use of ZBrush is for sculpting in 3D. Typically, ZBrush has been used as a finishing tool, with the basic geometry modeled elsewhere and the final touches added in ZBrush. The software’s ability to handle very large models allows for a high degree of detail to be added. This workflow, however, has been slowly changing with the addition of tools that allow you to create geometry from scratch within ZBrush. One of the most interesting of these tools is called ZSketch, which provides a digital equivalent of clay modeling techniques. Typically, sculpting in clay involves creating a wire armature to which clay strips are added, building up the model from scratch. ZSketch allows you to paint “strips” of geometry that can be attached to a ZBrush skeleton, much like an armature. ZSketch also can be used without an armature to model free-form.

This process is driven by ZBrush’s ZSpheres technology, which uses the brush strokes to create the underlying geometric mesh. ZSpheres II, the new version, allows for branching structures—such as hands, fingers, and limbs—to be created.

Another nice modeling improvement is called Unified Skin, which gives you the power to control how ZBrush creates a mesh. The updated version of Unified Skin greatly improves the ability to create edge loops around specific areas of the mesh, as well as create smoothing groups. Simply put, these new features allow you to sketch in 3D to your heart’s content, and then be able to distill those creations down to models that can be used in other 3D applications, such as assets for film, games, and so on.

In addition to creating and sculpting geometry, ZBrush can be used as a paint package, either painting on a canvas or a 3D model. Not only can you paint color and texture, but you can also paint materials on a model. Of course, when painting on a 3D model, you need to be able to map the brush strokes to the model effectively. The new version of ZBrush helps out in this area by revamping the UV editors, which control how textures are placed on a model. You can also continued on page 48

ZBrush
$595
Pixologic
www.pixologic.com
For additional product news and information, visit CGW.com
SOFTWARE

Digital Characters
Personalized 3D

Daz 3D and Gizmoz have partnered to create an online marketplace for high-quality, inter-compatible digital characters and accessories. The new company will deliver digital goods with lifelike characteristics for use by artists of any level working on social networks, cross-platform gaming, 3D animation, and development. Gizmoz’s photorealistic head reconstruction and online personalization service combines with Daz 3D’s full-figure content, desktop software tools, and community to provide creative professionals, gamers, and consumers with a virtual goods design center and marketplace. The custom avatars can be managed and seamlessly transported to virtual environments or production pipelines. In fact, all characters and technologies can be used in PC and console games, social networks, video clips, and mobile applications, as well as in professional modeling, animation, and illustration projects. The merged company plans to unveil its first new products this quarter.
WIN
Daz 3D; www.daz3d.com Gizmoz; www.gizmoz.com
Rotoscoping
Mocha Upgrades

Imagineer Systems has upgraded its planar tracking and rotoscoping software, Mocha and Mocha for After Effects. Mocha Version 1.6 and Mocha for After Effects Version 2 newly support tracking data export directly to Red Giant Software’s Warp plug-in for After Effects, enabling a simplified workflow. Mocha for After Effects, a stand-alone 2D tracking tool based on Imagineer Systems’ 2.5D Planar Tracking technology, helps artists generate solid tracks; produce position-, scale-, rotation-, shear-, and perspective-matched tracks; and export data to After Effects. Now available, Mocha Version 1.6 is priced at $1095. Mocha for After Effects 2.1.0 is now available for $210. An upgrade to Version 2 is priced at $110.
Imagineer Systems; www.imagineersystems.com
WIN
MAC
Sculpting and Modeling
Virtual Clay

Tactus Technologies has unveiled Protean, a 3D virtual clay sculpting and modeling software for producing fast, early-stage designs for engineering, animation, and other applications. Protean is designed to simplify real-time 3D volumetric modeling, resulting in efficient, freeform shape and model creation. The easy-to-use, tool-based modeling structure enables rapid design and prototyping, delivering design and modeling capabilities to industrial designers, engineers, creative artists, sculptors, architects, animators, game designers, marketers, bioengineers, and others. Protean emulates clay modeling and enables users to deform models by pushing, pulling, twisting, and stretching the virtual clay material. Protean-created 3D models can be imported into popular software such as Autodesk’s 3ds Max, for use in larger-scale animation or product design projects.
Tactus Technologies; www.proteanclay.com, www.tactustech.com
continued from page 46

...add texture management to any tool in ZBrush to paint textures as you model.

ZBrush can handle levels of geometry much higher than most other 3D software, making it easy to create models so rich that they’re hard to import elsewhere. To facilitate this, Pixologic has developed GoZ, another important feature that allows for tighter integration with other 3D software, such as Autodesk’s Maya, Luxology’s Modo, and Maxon’s Cinema 4D. GoZ not only exports 3D geometry, but it also exports such information as bump, normal, and displacement maps. This allows for a model to be geometrically light, with the detail added through simple image maps. Upon import into a third-party package, GoZ uses these image maps to set up all the shading networks for you. GoZ will take care of simple operations—such as correcting point and polygon order—as well as more advanced operations that require complete remapping. The updated mesh is immediately ready for further detailing, map extractions, and transferring to any other GoZ-enabled application.

For those who are using ZBrush as a paint package, the 3D nature of the software can add a lot of power. Pixologic calls this “2.5D,” and it allows you to paint in relief. Images painted in 2.5D can be further modified by changing the lighting, for instance. Additionally, Pixologic now adds a 2D sketching feature called QuickSketch. This is basically a sketching tool that allows you to quickly record your ideas in 2D and then move the roughs into a 3D space.

Overall, I really enjoyed ZBrush. The software definitely has a learning curve, but once you get the hang of it, sculpting in 3D is a natural and intuitive process. Anyone who wants to create highly detailed and realistic models should give the software a look. ■
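GoZ itself is proprietary, but the principle it relies on, keeping the mesh geometrically light while storing fine detail in image maps, is a standard graphics technique. As a generic illustration (not GoZ’s internals), a normal map can be derived from a height or displacement map by central differences:

```python
# Deriving per-pixel unit normals from a height (displacement) map by
# central differences. Illustrates the general detail-in-image-maps
# technique, not Pixologic's proprietary GoZ export.
import math

def height_to_normals(height):
    """Return a grid of (nx, ny, nz) unit normals; border pixels get (0, 0, 1)."""
    h, w = len(height), len(height[0])
    normals = [[(0.0, 0.0, 1.0)] * w for _ in range(h)]
    for y in range(1, h - 1):
        row = []
        for x in range(w):
            if 1 <= x < w - 1:
                dx = (height[y][x + 1] - height[y][x - 1]) / 2.0
                dy = (height[y + 1][x] - height[y - 1][x]) / 2.0
                n = (-dx, -dy, 1.0)                  # gradient tilts the normal
                length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
                row.append(tuple(c / length for c in n))
            else:
                row.append((0.0, 0.0, 1.0))
        normals[y] = row
    return normals
```

At render time the shader perturbs the low-poly surface normal with these values, so a flat mesh lights as if the sculpted detail were still present.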
March 2010, Volume 33, Number 3: COMPUTER GRAPHICS WORLD (USPS 665-250) (ISSN-0271-4159) is published monthly (12 issues) by COP Communications, Inc. Corporate offices: 620 West Elk Avenue, Glendale, CA 91204, Tel: 818-291-1100; FAX: 818-291-1190; Web Address:
[email protected]. Periodicals postage paid at Glendale, CA, 91205 & additional mailing offices. COMPUTER GRAPHICS WORLD is distributed worldwide. Annual subscription prices are $72, USA; $98, Canada & Mexico; $150 International airfreight. To order subscriptions, call 847-559-7310. © 2010 CGW by COP Communications, Inc. All rights reserved. No material may be reprinted without permission. Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by Computer Graphics World, ISSN-0271-4159, provided that the appropriate fee is paid directly to Copyright Clearance Center Inc., 222 Rosewood Drive, Danvers, MA 01923 USA 508-750-8400. Prior to photocopying items for educational classroom use, please contact Copyright Clearance Center Inc., 222 Rosewood Drive, Danvers, MA 01923 USA 508-750-8400. For further information check Copyright Clearance Center Inc. online at: www.copyright.com. The COMPUTER GRAPHICS WORLD fee code for users of the Transactional Reporting Services is 0271-4159/96 $1.00 + .35. POSTMASTER: Send change of address form to Computer Graphics World, P.O. Box 3296, Northbrook, IL 60065-3296.