Producing 24p Video
Producing 24p Video
John Skidgel
San Francisco, CA
Published by CMP Books, an imprint of CMP Media LLC
600 Harrison Street, San Francisco, CA 94107 USA
Tel: 415-947-6615; Fax: 415-947-6015
www.cmpbooks.com; email: [email protected]

Designations used by companies to distinguish their products are often claimed as trademarks. In all instances where CMP is aware of a trademark claim, the product name appears in initial capital letters, in all capital letters, or in accordance with the vendor's capitalization preference. Readers should contact the appropriate companies for more complete information on trademarks and trademark registrations. All trademarks and registered trademarks in this book are the property of their respective holders.

Copyright © 2005 by CMP Media LLC, except where noted otherwise. Published by CMP Books, CMP Media LLC. All rights reserved. Printed in the United States of America. No part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher, with the exception that the program listings may be entered, stored, and executed in a computer system, but they may not be reproduced for publication.

The publisher does not offer any warranties and does not guarantee the accuracy, adequacy, or completeness of any information herein and is not responsible for any errors or omissions. The publisher assumes no liability for damages resulting from the use of the information in this book or for any infringement of the intellectual property rights of third parties that would result from the use of this information.

Managing editor: Gail Saari
Layout design: John Skidgel
Cover design: John Skidgel
Editors: Allison Skidgel and Hastings Hart
Technical editor: Richard Young
Distributed to the book trade in the U.S. by:
Publishers Group West
1700 Fourth Street
Berkeley, CA 94710
1-800-788-3123

Distributed in Canada by:
Jaguar Book Group
100 Armstrong Avenue
Georgetown, Ontario M6K 3E7 Canada
905-877-4483
Library of Congress Cataloging-in-Publication Data
Skidgel, John.
Producing 24p video / John Skidgel.
p. cm.
Includes bibliographical references and index.
ISBN-13: 978-1-57820-263-8 (alk. paper)
ISBN-10: 1-57820-263-9 (alk. paper)
1. Video recordings--Production and direction. I. Title.
PN1992.94.S56 2005
384.55'8--dc22
2005033351
For individual orders and for information on special discounts for quantity orders, please contact:
CMP Books Distribution Center, 6600 Silacci Way, Gilroy, CA 95020
Tel: 1-800-500-6875 or 408-848-3854; Fax: 408-848-5784
Email: [email protected]; Web: www.cmpbooks.com

Printed in the United States of America
05 06 07 08 09    5 4 3 2 1
ISBN: 1-57820-263-9
Dedication

For Beatriz, who has brought Allison and me so much joy.
Contents

Dedication
Acknowledgements
Introduction
  The 24p Revolution
  The Book's Audience
  Book Features
  Before You Begin
Film, Video, and 24p
  Film
  Video
  24p
  Digital Video Formats
  The End of Tape Acquisition
24p Preproduction
  Preproduction
  Business Activities
  Get a Crew
  Casting
  Storyboarding and Previsualization
  Production Design
  Rehearsals
  Acquiring Equipment
24p Cinematography
  24p and Cinematography
  The Purpose of the Shot
  Focus
  Focal Length
  Movement
  Shot Sizes
  Camera Craft
  Documentary Cinematography
  Video Engineering
  Understanding a Camera's Basic Controls
  Scene Files and Custom Presets
24p Audio
  The Importance of Sound
  Microphone Characteristics
  Monitoring and Recording Audio
24p Editorial and Postproduction
  Editing and 24p Postproduction
  Editorial Strategies
  Acquiring and Interpreting 24p Material
  Converting Interlaced Video to 24p
24p Output Options
  Compressing 24p for DVD
  Compressing for Internet Video
  Transferring Video to Film
Glossary
Index
Acknowledgements

Thanks to Dorothy Cox, Gail Saari, and Paul Temme at CMP Books for helping me realize this book. Cheers to Allison Skidgel and Hastings Hart, the editors, for improving my prose. Much appreciation to Richard Young, my technical editor, for pushing for more explanations and getting persnickety. A huge round of applause to Jean-Paul Bonjour and Zach Fine for their production stills. And one last round of thanks to Jean-Paul Bonjour, Dan Cowles, Anthony Lucero, Eric Escobar, and Joel Gardner for granting interviews.
Introduction

It all started in 1991 with the introduction of QuickTime. Fast forward fifteen years, and filmmaking is more accessible than ever.
The 24p Revolution

In the past fifteen years we have seen a revolution in the way film and video are shot, edited, and distributed. This revolution began and will continue because digital technologies are rapidly replacing analog ones, and in some cases are replacing current digital solutions. DVDs have replaced VHS tape, but they will quickly be supplanted by on-demand Internet video. For me, this revolution began in 1991 as I sat in my dorm room watching a 180 × 160 pixel QuickTime movie running at 15 frames per second. It was the scene from Terminator 2 in which California's 38th governor famously said, "I need a vacation." If someone had told me then that editing a 1280 × 720 pixel movie at 24 fps on a laptop would be possible before I had my first child, I would have thought that they needed a vacation.
1991: QuickTime 1.0 released by Apple Computer.
1992: Premiere 1.0 released by Adobe Systems.
1993: After Effects 1.0 released by CoSA.
1995: FireWire approved as IEEE 1394.
1997: The first DVD players are sold in the US.
1999: Power Macintosh G3 with FireWire released by Apple Computer.
1999: DCR-VX1000 released by Sony.
1999: Final Cut 1.0 released by Apple Computer.
1999: Episode I shot with Sony's CineAlta HD camera.
2000: Magic Bullet is developed by The Orphanage.
2002: DVX-100 24p SD camera released by Panasonic.
2005: HVX-200 24p HD camera released by Panasonic.

Figure 1: Important milestones in the development of digital video
Fast Forward

Today, the economy of the 24p format offers a path to finishing on film at a fraction of the cost of originating on film. It accomplishes this feat by matching the frame rate of film and by storing frames progressively. Smaller DV cameras that support 24p offer creative options that are not possible with larger film or video cameras. However, the 24p format is not solely responsible for this revolution. Other factors are:
• Mini-DV production equipment, such as lights, tripods, and microphones, is now lighter, more compact, and cheaper, which supports this style of filmmaking.
• Desktop video applications enable filmmakers to edit, composite, and distribute their creations to video, film, DVD, and the Internet.
• The progressive nature of 24p is an advantage when shooting blue- or green-screen footage. Because 24p frames are complete rather than interlaced, it is easier to pull usable mattes.
• Productions broadcasting at 29.97 fps may benefit from originating at 24p because it gives the material a preferable film look.
Those looking to distribute their content on film are not the only ones who benefit from 24p. DVD and Web video producers benefit because progressive frames are easier to compress than interlaced ones, and because 24p uses 20 percent less space than 30 fps video.
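If you want to check that 20 percent figure, the arithmetic is simple: 24p stores 24 frames for every 30 that 30 fps video stores. Here is a quick Python sketch (the function name is mine, purely for illustration):

```python
def frame_savings(fps_a, fps_b):
    """Fraction of frames saved by shooting at fps_a instead of fps_b."""
    return round(1 - fps_a / fps_b, 4)

# 24p versus 30 fps video: 20 percent fewer frames to store and compress
print(frame_savings(24, 30))  # -> 0.2
```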
The Book's Audience

I wrote this book with several audiences in mind: the filmmaking student who has access to 24p cameras; the aspiring narrative or documentary filmmaker who has purchased a 24p camera but does not know where to begin; and the creative professional who is familiar with video but wants to take those skills to the next level to capitalize on the huge demand for desktop video services, such as DVD production and Internet video.
• Corporate video producers are interested in 24p for creating better web video.
• Independent filmmakers are considering 24p as a production format.
• Creative professionals are looking to expand into video with 24p.
• Film students are choosing 24p for its film look and lower cost.

Figure 2: The audience for this book
Book Features

This book covers 24p preproduction, production, postproduction, and distribution. It is like a three-week crash course in video production, with the prerequisite that you own, borrow, or rent a 24p camera and related equipment. So before you buy into the promise of 24p as your production workflow, read the rest of this chapter. It covers the book's features, organization, and remaining prerequisites.
How this Book is Organized

Each chapter begins with an explanation of the topic at hand, then presents examples, tips, and featured case studies. It is my hope that you will get the right amount of background information before applying the techniques yourself, and that you will benefit from hearing how other filmmakers are creating content. The book is organized into four sections: a 24p primer, 24p production, 24p postproduction, and 24p output options. A glossary and an index round out the book. For book updates, extra tutorials, and other online 24p resources, go to the book's web site at: http://www.skidgel.com/writing/24p/index.html
Important Notes

Important information is separated from the main body of text by a pair of dashed rules.

• Production tip: Time-saving methods for 24p production and software usage tips.
• Cautionary note: Production gotchas to avoid.
• Technical note: Technical definitions of camera and digital video terminology.
• Web reference: Links to Internet resources.
• On the DVD: See material on the book's DVD.

Figure 3: Book icons used throughout the book
Chapter Overview

The first chapter details how 24p works and its relation to traditional film and video. Chapter 2 gives an overview of the entire production workflow and stresses the activities you do during preproduction, the planning stage before production, to ensure that the entire effort runs smoothly. Chapter 3 covers 24p cinematography: how to frame a shot, camera movement, production tips, and how to adjust settings on different cameras. Chapter 4 demonstrates location sound recording. We will contrast the onboard microphones with external shotgun, lavalier, and omnidirectional microphones. The chapter also discusses the ergonomics of proper boom pole operation and how to make sure the sound is as good as (or better than) the picture. Chapter 5 covers editing and postproduction. It outlines the steps you need to take to ensure Apple Final Cut Pro or Adobe Premiere Pro will properly interpret 24p footage. It also shows how to convert 60i footage to 24p using software such as Magic Bullet, After Effects, and Nattress Film Effects. Chapter 6 discusses distribution options for 24p video, such as DVD-Video, Internet video (Flash, QuickTime, and Windows Media), and video-to-film transfer.
The accompanying DVD-ROM

I wanted the accompanying DVD to be more than just a coaster for your favorite beverage. It contains sample footage in 24p DV as well as 24p 720p HD, along with interviews with editors, narrative and documentary filmmakers, and corporate video producers who all work with 24p. The DVD also contains production checklists and forms.
Before You Begin

To shoot and produce video to professional standards you need more than just a camera. The following items are what I would suggest to a new filmmaker looking to get gear, and if I were buying, I would generally buy them in the order listed.
• Camera: It is safe to assume that you are or will be producing 24p video. That said, you should own, borrow, or rent a 3-CCD video camera capable of recording 24p material. At the time of this writing, your options are the Canon XL2 and the Panasonic DVX-100, DVX-100A, DVX-100B, HVX-200, SDX-800, or SDX-900. Any of the DVX-100 models or the XL2 will suffice.
• Tripod: A video tripod with a smooth fluid head and a level rounds out your camera kit. Hand-held shots are great, but they are not meant for every shot. If you videotape scenery, interviews, or other B-roll (extra footage), a level tripod gives you steady footage and smoother pans and tilts.
• Microphone and Mixer: Sound is the neglected stepchild of beginning video production. While onboard microphones are improving, the sound from a lavalier or a shotgun microphone mounted on a boom pole is immensely better. A portable sound mixer from Sound Devices or Shure is also a valuable tool for ensuring that audio levels do not become too hot and therefore useless for broadcast. Camera rental houses also offer location sound packages.
• Video Deck: If you already own a video camera, consider purchasing a video deck to capture the video from tape. A deck saves wear and tear on your camera and saves you the time of reconnecting the camera to the computer.
• Professional Video Monitor: You should always use a professional television monitor to preview your broadcast video project, even if your computer has action- and title-safe guides. If you plan to edit widescreen content, look into a video monitor that can switch between 4:3 and 16:9 aspect ratios. Most filmmakers opt for a 14-inch model.
Smaller field monitors are also incredibly useful when the shoot warrants them. If you have the luxury of time and will not be moving around a lot, as in a sit-down interview, an 8- or 9-inch field monitor can help you frame your shots and gives a much better preview of the final output than the camera's viewfinder or flip-out LCD screen because it has higher resolution.
Hardware and Software

To produce 24p video, you will need a modern computer. When I say "modern," I mean something released in the past year and a half that has ample memory, processor speed, and hard disk space. Both modern Macintosh and PC computers are very capable of editing 24p material.
Choose whichever platform is most comfortable for you. If you're new to buying a computer for video production, here's the advice I give:
• Processor: Get the fastest processor you can afford. Do not go below an 800 MHz G4 (Mac OS) or Pentium III (Windows); I suggest purchasing the fastest G4/G5 or Pentium processor you can afford. If the budget allows, consider a multiprocessor system. Video rendering, video encoding, and disc burning all run faster with these processor enhancements.
• Memory: At least 512 megabytes is recommended. If you can afford one gigabyte or more of memory, go for it. The additional memory will decrease rendering times and give your machine more rendering capability. Adding memory is also a great way to breathe new life into an older machine.
• Storage: Because 24p footage runs between three and four megabytes per second, large-capacity hard disks are a necessity. Have at least 120 gigabytes of available storage, and add more, such as a second hard disk or a disk array, if you plan to edit video on the same machine.
• Operating Systems: If you are working on a Macintosh, you will need OS X to work with 24p material. Final Cut Pro 4 and Final Cut HD, which both support 24p material, do not run on OS 9. If you are working on a Windows PC, I suggest Windows XP Professional. Most modern video programs require XP because they take advantage of features only available in XP.
• Display and Video Card: Between editing and effects, you should have at least one large (1280 × 1024 pixel) monitor; two monitors are highly recommended. With two displays, you can spread multiple windows out and quickly access items without having to close and reopen them. Your display card should support full color (referred to as "millions of colors" or 24-bit color) at 1280 × 1024 pixels. If you plan to drive two displays with a single card, I recommend purchasing a card with 128-512 megabytes of video memory.
If you plan to use programs such as Adobe After Effects for compositing, motion graphics, and special effects work, buy a card that has very good OpenGL acceleration and supports GPU-based (graphics processing unit) effects.
• DVD Drive: Nowadays, video software and tutorial content ship on DVD, so you need a DVD-ROM drive. If you plan to create your own DVDs, you'll need a drive that burns them.
• Sound: A stereo sound card is required to hear audio. In addition, consider purchasing external stereo speakers; reference monitor speakers are a must and will help you with sound editing.
• Digital Camera: A digital still camera is a great resource for quickly taking photos and transferring them to Photoshop for embellishment and color-correction experiments. A camera also helps with location scouting, continuity photographs, and production photos.

I wish you the best of luck in your filmmaking endeavors, and I hope my book helps you with your projects. Let me know how your experience goes by using the contact form at: www.skidgel.com/about/index.html
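To turn the storage advice above into a concrete estimate, here is a short Python sketch. The 3.6 megabytes per second figure is an approximation of DV's data rate within the three-to-four range mentioned above; treat the numbers as ballpark values, not a guarantee for any particular codec:

```python
def hours_of_footage(disk_gb, mb_per_sec=3.6):
    """Rough hours of DV footage a disk of disk_gb gigabytes can hold."""
    total_mb = disk_gb * 1024       # gigabytes to megabytes
    seconds = total_mb / mb_per_sec
    return seconds / 3600           # seconds to hours

# A 120 GB drive holds roughly nine and a half hours of DV footage
print(round(hours_of_footage(120), 1))  # -> 9.5
```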
Film, Video, and 24p

Usually film and video differ in resolution and timing, but the video format 24p has both the flexibility of video and the look of film.
Film, Video, and 24p

If this book is about 24p digital video, why discuss film? This chapter traces film's historical significance in the context of video and 24p. Whether you want to do a video-to-film transfer for a film festival, give your project a film look, or save space on a DVD project, this chapter will help you better understand 24p and the filmmaking process.
Film

Three inventions are responsible for the rise of motion pictures: flexible photographic film, improved transport mechanisms, and synchronized sound recording.
Exposing Film

Color film stock for a motion picture camera is like color film for a still camera; it is coated with three layers of light-sensitive emulsion that record an inverted image, or negative, when exposed. The three layers are sensitive to the three primary colors of light: red, green, and blue. During exposure, a negative forms in each emulsion layer corresponding to one primary color. Dyes in each layer harden depending upon the scene's brightness, saturation, and hue. What is light in the original is dark in the negative and vice versa.

Figure 1: Film exposure (emulsion layers, composited negative, positive image)
The recorded image is a negative; its tone and color information are inverted from the original image. The exposure is controlled by the camera's shutter, and the image results from the camera's lens focusing light onto the emulsion. The film's flexibility allows it to be rolled, threaded, and moved through the camera by the transport mechanism, which ensures consistent recording.
From Lab to Edit Room

Before the era of digital intermediates (DI), the workflow from processing and editing through striking the final release prints took the following form.

Figure 2: From developed film to theatrical release. First generation: unprocessed film, original negative. Second generation: master positive and working positive (or telecined copy) used for editing; the final cut is used to match back to the negative to create the master. Third generation: duplicate negative. Fourth generation: release print.
When the film is treated chemically, silver halide washes out of the unexposed areas of the emulsion; dark areas appear light, and light areas appear dark. This negative is the first generation of the film. A working print, or second generation, is made by printing the negative onto an intermediate reel of film. The working print, as its name implies, is used for editing. When editing is complete, the original negative is matched and cut according to the edits made with the working print. This is then called the "final cut," and it is duplicated (third generation) and used to make release prints (fourth generation). With all of this processing, it is not hard to imagine the resolution that is lost between the original film stock and the print shown in a theater.
Persistence of Vision

Cinema creates narrative (or recreates reality) by fusing sound and picture into a sequential medium. This medium relies upon persistence of vision, the brain's ability to retain an image from the retina for a fraction of a second longer than it is actually seen. The illusion of motion occurs when the frequency of sequential frames is high enough that they do not appear to flicker. The research behind the theory of persistence of vision determined the established frame rates for film and video.
Figure 3: Spinning a zoetrope (side 1, side 2, spun together) demonstrates persistence of vision
Film's Frame Rate

Originally there was no standard frame rate for shooting motion pictures, since the early cameras were cranked by hand. When mechanical versions were developed, 16 fps was the standard at first, and frame rates increased until "talkies," movies with sound, arrived. At that point, 24 fps became the standard.

Refresh Rate

It is widely accepted that motion pictures played back at less than 16 fps flicker. While today's generation of video gamers might argue that 60 fps or more is the requirement, the flicker effect is almost entirely eliminated at 48 fps. Early film inventors toyed with a frame rate of 48 fps but realized this was too costly for production. They settled on 24 fps; to reduce flicker, each of the 24 frames is shown twice, which translates to a refresh rate of 48 Hz. Frame rate and flicker (or "refresh") rate are often confused. Frame rate is the number of frames recorded and then displayed each second; the higher the frame rate, the more motion detail is present. Refresh rate is the number of times an image is updated during presentation.
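The relationship between the two rates is just multiplication: the refresh rate is the frame rate times the number of times each frame is flashed. A tiny Python illustration (the function name is mine):

```python
def refresh_rate(frame_rate, flashes_per_frame):
    """Refresh rate when a projector flashes each frame more than once."""
    return frame_rate * flashes_per_frame

# Film: 24 frames per second, each frame shown twice
print(refresh_rate(24, 2))  # -> 48
```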
Looking at a Film Camera

When I was in art school, I had a calligraphy professor who told me that hand-drawn letter forms are partly a result of the tool used to create them. (The other part being ability.) This notion also applies to film and video cameras. The more sophisticated a camera is, the more sophisticated the images that can be created with it. The key components of a film camera that contribute to what it can reproduce are the lens, the shutter, the exposure plane, and the transport mechanism.
Figure 4: Film camera. A matte box and French flags protect the lens from sun glare; a follow focus drive facilitates repeatable focus pulls for a shot.
Matte Box

A matte box is a hood that extends around the lens and shields it from unwanted glare. Some matte boxes are a fixed size and are made of metal or hard plastic, while others fold in and out like an accordion. The accordion mechanism allows the matte box to fit around different-sized lenses. Both kinds often have a slot to hold one or more lens filters. French flags are extensions that pivot on the edges of the matte box and allow flexibility when blocking light.

Filters

A filter is a piece of glass that alters the light for a corrective or aesthetic purpose. There are several types of filters: diffusion, polarizer, neutral density, color, graduated, diopter, and UV. Filters are made to screw onto a lens or to slide into a matte box. In today's age of digital postproduction, filters used for aesthetic purposes should be used sparingly, if at all. One can easily achieve the effects of these filters in post, and often there are more creative options for how the filter is applied in a program such as After Effects or Combustion. When you shoot with a filter you are also married to it, because it is nearly impossible to remove the filter's effect in post.
Figure 5: Filters
Lens

A lens focuses light from the objects in front of it and transfers the light onto film. The light creates an exposure on the film, which is a record of the objects in front of the lens. With motion picture film shot at normal speed, an exposure occurs every 1/24 of a second, and the characteristics of the captured image are determined by the type of lens and its mechanics. The two types of lenses most often used in film and video production are prime and zoom lenses.

Lenses are not always interchangeable, and not only because of differences between camera manufacturers. A lens designed for a 35mm film camera will not immediately work on a 16mm film camera, because a lens has to be designed with the size of its target in mind: a 35mm film lens is designed to form an image on an emulsion that is 35mm wide.

Prime Lens

A prime lens is a combination of lens elements with a fixed focal length. A shorter distance between the lens and the film equates to a wider angle of view; conversely, the longer the distance, the narrower the angle of view. There are a few common types of prime lenses: normal, wide, telephoto, and fish-eye. A normal lens records an image with little to no distortion, like the human eye. A telephoto lens is a very long lens that makes distant objects appear bigger. A wide-angle lens has a short focal length and records a wide angle of view. A fish-eye lens is an extremely wide-angle lens that gives an angle of view approaching 180 degrees. Prime lenses are simpler in construction than zoom lenses but cannot record subjects at varying focal lengths. Because they are mechanically simpler, they are often highly optimized for sharpness and clarity at their given focal length. Prime lenses also offer wider maximum apertures than zoom lenses, and as a result are better at recording images in low light and at achieving a shallower depth of field.
Figure 6: Prime lens
Zoom Lens

A zoom lens has a variable focal length for focusing upon a narrower or wider angle of view. The focal length is altered by two elements: one magnifies the subject while the other maintains focus. The zoom capability is determined by the ratio of the lens's longest to shortest focal length. For example, if a zoom lens has a variable focal length between 100 and 1,000mm, the zoom capability of the lens is 10×.
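That ratio is easy to compute; a hypothetical helper in Python:

```python
def zoom_ratio(longest_mm, shortest_mm):
    """Zoom capability: the lens's longest-to-shortest focal length ratio."""
    return longest_mm / shortest_mm

# The lens from the example above, 100mm to 1,000mm
print(zoom_ratio(1000, 100))  # -> 10.0, a 10x zoom
```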
A zoom lens has a variable focal length and can act like a wide, normal, or telephoto lens. This is because it has a mechanism that allows the focal point to move closer or farther away from the film. A zoom lens allows the camera to remain fixed while the lens does the work of bringing the subject closer to the viewer. Of course, this should be done only when you cannot move the camera towards the subject. Any good director of photography will tell you that using a dolly to move a camera towards the subject yields a more realistic camera move. Prime lenses are preferred over zoom lenses because a prime lens produces better images as a result of having better optics. In addition, the fixed focal length of a prime lens guarantees the same angle of view. With a zoom lens, one can easily forget its current focal length since it is variable.
Focus Ring and Follow Focus Gear

The primary control a lens offers (besides zoom, on zoom lenses) is focus. Depth of field is the area in front of the lens where items remain sharp and in focus. A focus ring on the lens, when rotated, moves this zone of sharpness nearer to or farther from the camera. As cameras have become more complicated and setups require more than one person to operate the camera, lenses have been equipped with gears, a barrel with a markable surface, and a knob, together known as a follow focus gear.
Iris

The iris, or diaphragm as it is sometimes called, controls the aperture, the opening through which light is exposed onto film. It operates just like the iris in an eye: in a dark room, the iris opens fully to capture more light so you can see; in bright light, the iris closes to reduce the intensity of the light. A mechanical iris is a set of thin spring-loaded sheets of metal that expand and collapse to control the camera's aperture. Aperture is measured in f-stops. An f-stop describes the aperture opening in relation to the focal length: f/2 signifies that the diameter of the aperture is one-half the focal length, f/4 means that it is one-fourth, and so on. When you shoot during the day, you might use an f-stop of f/2 to f/4. If you are shooting at night, you will probably want to open the iris as wide as it will go. If you are shooting with a lot of bright studio lights, you can stop the iris down to f/11 or f/16.
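Because an f-stop expresses aperture diameter as a fraction of focal length, you can check the relationship with a couple of lines of Python (the 50mm focal length here is only an example):

```python
def aperture_diameter(focal_length_mm, f_stop):
    """Aperture opening implied by a focal length and an f-stop."""
    return focal_length_mm / f_stop

print(aperture_diameter(50, 2))  # -> 25.0 (f/2: half the focal length)
print(aperture_diameter(50, 4))  # -> 12.5 (f/4: one-fourth)
```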
Shutter

A shutter is the bow-tie-shaped disc that spins between the lens and the film. When it covers the film gate, it prevents light from passing through; when it does not, light is free to pass. Unlike a still picture camera, a motion picture camera cannot truly vary its shutter speed: since film runs at 24 fps, the shutter must cycle at 24 fps. Cinematographers get around this limitation with a shutter that can vary the amount of light passing through it. This is not the iris but a second shutter blade, slightly offset from the standard one. When the two perfectly overlap, more light comes through the shutter in a given time; when the second blade is spread out and there is less overlap, less light reaches the film.
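Cinematographers usually describe this overlap as a shutter angle: the number of degrees of the shutter's rotation during which the film is exposed. (The text above does not use the term, but it is standard cinematography vocabulary.) The exposure time per frame follows directly, as this Python sketch shows:

```python
def exposure_time(shutter_angle_deg, fps=24):
    """Seconds of exposure per frame for a rotary shutter."""
    return (shutter_angle_deg / 360) / fps

# The common 180-degree shutter at 24 fps exposes each frame for 1/48 s
print(exposure_time(180))  # -> about 0.0208 seconds
```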
Transport Mechanism

A film camera records at a consistent frame rate because it has a mechanically precise transport mechanism. Film is advanced steadily from the feed reel to the take-up reel by toothed gears that catch the perforations on the film's edge. An additional pulldown mechanism also grabs the perforations to hold and expose each frame while the shutter is open. These mechanisms (the shutter, the pulldown, and the reels) work in concert. Advanced film cameras, called variable frame rate cameras, can vary the rate at which motion is captured. Undercranking refers to recording under 24 fps; when undercranked film is played back at the normal rate, motion appears fast. Overcranking refers to recording over 24 fps; when overcranked film is played back at the normal rate, motion appears slow.
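The apparent change in speed is the ratio of the playback rate to the capture rate; a small Python sketch (the function name is illustrative):

```python
def playback_speed(capture_fps, playback_fps=24):
    """Apparent speed of motion when cranked footage plays at playback_fps.

    Greater than 1.0 means faster than life (undercranked);
    less than 1.0 means slow motion (overcranked).
    """
    return playback_fps / capture_fps

print(playback_speed(12))  # undercranked at 12 fps -> 2.0, twice normal speed
print(playback_speed(48))  # overcranked at 48 fps -> 0.5, half speed
```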
Film Sizes

The sizes most often used for motion picture production are 16mm and 35mm. There are smaller sizes (8mm and Super 8mm) and larger ones (65mm and 70mm), but we will not cover them because they are not as common. To learn more about 8mm, go to Kodak's motion picture film resources page at: www.kodak.com/US/en/motion/super8/
Figure 7: Film sizes (35mm, Super 16mm, 16mm)
16mm and Super 16mm

16mm film is a low-cost alternative to 35mm because of its cheaper film stock and developing costs. As a result, it has been a popular choice among independent, student, and budget-conscious filmmakers.
There are two flavors of 16mm film: regular 16mm and Super 16mm. Regular 16mm has a smaller imageable area because it must leave room for the soundtrack when developed. Super 16mm was developed for those who want to shoot with 16mm camera equipment and blow the negative up to 35mm. Super 16mm sacrifices the soundtrack area for more image area, so filmmakers shooting in Super 16mm need a separate method for recording sound.
Special Film Development Processes Filmmakers and cinematographers are always trying to find new looks for their films. In the Fifties, it was the harsh chiaroscuro look of film noir. In the Nineties, it was the grunge look of Seven, the gritty, desaturated appearance of Saving Private Ryan, and the omnipresent green fluorescence of The Matrix. To meet these demands, a near renaissance has taken place at film laboratories in how film is creatively processed.

When film is developed normally, it first runs through a developer, which develops the image on the negative by transforming exposed silver halide into metallic silver. It then runs through a stop bath, which removes the developer solution from the film, followed by an additional wash that cleans the film further. Next it goes through a fixer solution, which removes the remaining undeveloped silver halide from the emulsion. Finally, the fixer is washed off and the film is dried and lubricated.

These looks are created by altering the development process so that additional silver is retained in the emulsion, which makes the blacks darker. More subtle methods produce velvety blacks while maintaining shadow detail and avoiding excessive desaturation. In most cases these steps are performed on dupes of the original negative, but sometimes the filmmaker is willing to take the risk of treating the originals. Luckily, digital postproduction can achieve most of the effects of these chemical processes without your having to handle any nasty chemicals or apply them to your original master tapes. In Chapter 5 we will cover how to create these looks in post.
Original Footage
LS Diffusion Max
LS Bleach Bypass
LS Neo
Figure 8: Special film treatments
Film’s Aspect Ratio Frame aspect ratio is the proportional relationship between an image’s width and its height. Frame aspect ratio is a relative measurement, and it should not be confused with resolution, which is absolute. Before the late Fifties, the aspect ratio for films was four units wide to three units high. This is the same as saying that the width is one-third longer than the height, so you may hear this aspect ratio referred to as 1.33. It is the same aspect ratio as the majority of television sets today.
Figure 9: Aspect ratios (4:3/1.33:1, 16:9/1.78:1, 1.85:1, and 2.40:1), with letterboxing of wide aspect ratios on a 4:3 television
Wide Aspect Ratios In the late Fifties, when television began to aggressively compete with cinema, the film industry introduced wider aspect ratios as a way to differentiate films from television productions. The wider aspect ratio also gave the director and the cinematographer a more pleasing frame in which to compose their shots. These aspect ratios are 1.85 and 2.40.
1.85 Aspect Ratio In reality, the 1.85 aspect ratio did not contain more information than the original 1.33 aspect ratio. Movies were still shot on film with an exposable area of 1.33, but the shots were composed within a sub-area of the frame that constituted a 1.85 image. The unwanted areas of the frame were blocked off in camera, in the development process, or by the projector when the film was shown.

2.40 Aspect Ratio 2.40 is an even wider aspect ratio than 1.85. It is also referred to as CinemaScope or Anamorphic. Pictures using 2.40 were also shot on film with an aspect ratio of 1.33, but instead of cropping the image even further, the picture is shot with an anamorphic lens, which optically squeezes a wide-format image into the 1.33 film frame. As a result, 2.40 film contains greater detail than 1.85, but it requires that the projector have a lens that can unsqueeze the anamorphic image when projecting it onto the screen.

Film Sound Recording Film sound recording is done on a separate machine: a Nagra, a digital audio tape (DAT) recorder, or a digital recorder that uses optical or solid-state recording media. The audio recording can be synchronized with the film recording by jam-synching the devices together. This process relies upon an accurate crystal that
sets the timing and recording of both devices to be the same. The clapper or slate seen in film production is used to synchronize sound and picture if the cameras are not jam-synched. To do this, when editing, lock the picture of the clapper closing with the sound of it closing, and everything is synchronized.
Video While there will always be purists, cinematography, like its sibling photography, is evolving from a chemical, analog process into an electronic, digital one. Video, on the other hand, does not have a contingent of curmudgeons clinging to analog formats. The news and video production industries have been happy to let go of analog video in favor of digital formats such as Digital Betacam, DVCAM, DVCPro, and even Mini DV. With HD production becoming a reality, analog formats will most likely disappear entirely. Digital is blurring the lines between our notions of film and video as resolution increases and production costs decrease.
Components of a Video Camera Like the image from a 16mm or 35mm film camera, the image produced by a video camera results from the sophistication and quality of its components. The saying "You get what you pay for" certainly applies to video cameras. Under the same lighting conditions, a $500 consumer camcorder is not going to match the quality of a $3,000 video camera. Under optimal lighting the difference may be acceptable, but as soon as there is less light, the features of a better (more expensive) camera compensate for poor lighting conditions in ways a consumer camera cannot. All of this is to say that knowing how a professional camera works can help you decide which camera to purchase or rent. More importantly, such an understanding enables you to shoot better pictures, gives you more creative options, and helps you improve your production workflow.
A mini shotgun mic connects to XLR inputs on the far side of the camera and offers better pickup than the onboard stereo microphones.
Three dedicated CCDs record better in low light situations.
A sun shade protects the lens from sun glare.
Audio level controls offer simple controls over audio recording levels and prevent hot audio.
Figure 10: Video camera
From Light to a Digital Image A digital video camera creates a digital representation of the image in front of the lens by focusing light onto the camera’s charge coupled device (CCD). A full-color image is created by separating the red, green, and blue light from the lens with mirrors or a prism and directing these rays to independent red, green, and blue CCDs. From each CCD, the picture information is sent to a digital signal processor (DSP), where it is processed and compressed into an image that is recorded onto tape.
Lens As in a film camera, a video camera’s lens focuses rays of light onto a CCD to create an image. The quality of a lens contributes to resolution and image fidelity. Any given lens will have a focal length and a zoom ratio, and additional controls on the lens affect exposure and focus. Zoom lenses are more common in video cameras, but professional video cameras can also be equipped with a prime lens. Focal length, zoom ratios, and a host of other cinematography issues are discussed further in Chapter 2, Cinematography.

Zoom Lens All consumer and most prosumer DV cameras have zoom lenses. Professional broadcast cameras allow for either a zoom or a prime lens. Zoom lenses are essential for electronic news gathering (ENG) and documentary work because they give the camera operator more agility in capturing important action. This flexibility is also considered advantageous because one lens does the work of three. Zooming, however, is rarely done in movies because in real life people cannot zoom with
their eyes. When the director or the cinematographer wants to be closer to a subject, they simply shoot closer to the subject or move the camera toward it on a dolly. Beware of digital zoom: it is not a true optical zoom but a zoom simulated by the camera's digital signal processor. If you want to zoom in more than your camera's optics allow, you are better off creating the zoom in your editing or effects software.

Prime Lens Most standard and high-definition video cameras aimed at broadcast news or film production can be equipped with a prime lens. But as with film, a prime lens has to be appropriate to the size of its target, which in this case is the camera's CCD. Adapters such as the Pro 35 and Mini35 digital image converters by P+S Technik allow a 35mm film lens to work on HD and even on Mini DV cameras. These adapters give better control over depth of field as well as a sharper image.
Charge Coupled Device A CCD in a digital video camera is analogous to film in a traditional motion picture camera. Like film, a CCD is sensitive to light, but rather than storing the exposed image permanently, its tiny photo sensors continuously convert the light they capture into electronic signals and feed these to the camera’s DSP. Many people mistakenly consider a CCD a digital device. In practice, a CCD outputs a series of analog voltages representing the light absorbed by each of the CCD's pixels. It is the DSP that converts these analog voltages into a compressed digital signal representing each field or frame of video.

Evaluating a Camera's CCD Four guiding factors for evaluating a video camera are the number of CCDs in the camera, the number of pixels each CCD has, the size of the CCDs, and the size of their pixels. The resolution of a camera's CCD is measured by how many photo sensors, or pixels, it has; more pixels mean higher resolution. A larger CCD tends to have more pixels and will record greater detail, and a larger CCD also allows for larger pixels. Larger pixels equate to increased light sensitivity and better detail in the shadows. But even the CCDs found in professional camcorders cannot capture the high dynamic range that film can: in general, the output from most CCDs spans roughly four f-stops, while film spans eight.

Entry-level consumer camcorders have a single quarter-inch CCD that records red, green, and blue information. These three color channels combine to create a full-color image. A single-CCD, or single-chip, camera is not as good at capturing detail and is less sensitive to light. If the scene it records is not perfectly lit, the resulting image will show a lot of smearing between distinct areas of color. More expensive cameras have three CCDs, one dedicated to each color; they capture sharper images and are better at retaining shadow detail.
After the light passes through the lens, a prism splits it into its three components of red, green, and blue (RGB) and directs each component color to a dedicated CCD. By dedicating a CCD to each color component, a three-chip camera records more color detail and does not crush the blacks or highlights in an image as quickly as
a single-chip camera. Prosumer cameras tend to have CCDs that are one-third of an inch wide, whereas professional broadcast cameras have CCDs that are two-thirds of an inch wide. Next-generation HD cameras are being equipped with image sensors whose imageable area approaches that of a 35mm film frame, twice the size of a two-thirds-inch CCD. But even these large sensors do not match the resolution of film. Film is said to have about 5,000 lines of vertical resolution, and most HD cameras record 1,080 lines. In any case, the signal coming from one or three CCDs is fed into a DSP that creates the full-color video image.

CMOS Image Sensors Complementary metal oxide semiconductor (CMOS) image sensors are a competing electronic capture medium to CCDs. They are less sensitive to light, more durable than CCDs (less prone to dead pixels), and cheaper to manufacture. However, they have not yet reached the performance levels of CCDs. Future cameras based upon CMOS designs will require fewer parts than cameras based upon CCD designs.

Building the Shutter into the CCD In a film camera, the shutter is the door in front of the film that allows or prohibits light from passing through, and control over the shutter affects exposure. Older video cameras with a CCD also had a mechanical shutter to prevent excess light from overloading the CCD; when a CCD was overloaded, the resulting video frames would smear. Newer cameras have CCDs based upon the frame interline transfer (FIT) design, in which shutters built into the CCD control exposure according to the camera's recorded field or frame rate. Modifying the shutter speed will produce strobing (shorter shutter speed) or smearing (longer shutter speed).
Digital Signal Processor (DSP) The DSP processes the voltage output by each CCD, creating a digital video stream which may or may not be compressed, and records this stream to tape. Better cameras offer manual control over the DSP. These settings include knee, gamma, black gamma, white balance, iris, gain, matrix, and contour, to name a few. By modifying these controls in camera, you can create looks that are similar to the look of film.
The Work of the DSP In this day and age, people are more comfortable with the idea of digital media. Compact discs have been around for 20 years, and DVDs nearly 10. With such comfort comes the loss of knowledge of what it means for something to be analog. It is the DSP that has made analog recording techniques obsolete. In the following sections, the functions of a typical DSP are explained.

What It Means to Be Analog and Digital For something to be analog means that it is analogous, or similar, to the original. Take the example of cymbals crashing. When they collide, sound vibrations are produced. An analog microphone and recorder capture the cymbals as an analog waveform. The waveform reproduces sound vibrations that are analogous to the vibrations of the original.
So one might ask, “When a digital recording of the cymbals plays back, the sound it produces appears to be analogous to the original sound. What is the difference?” The difference is in how each method represents the recording of the original sound. An analog recording represents the sound as a smooth, continuous waveform. A digital recording, by contrast, is a sampled, discrete, and often compressed approximation.

The Analog to Digital Process Another misconception is that DV cameras are completely digital. This is not true: the CCD outputs an analog signal, measured in volts, that the DSP converts into a digital signal. This conversion involves two steps, color sampling and quantization, which yield a data rate that can then be further compressed before the video is stored or transmitted.

Color Spaces Before we talk about color sampling, it is important to have a brief discussion about color spaces. Digital pictures originate in the RGB color space. Television and computer monitors render color using RGB too, and computer-generated motion graphics and renderings also originate in RGB. Broadcast video, however, is broadcast in a different color space known as Y’CbCr. You will also hear Y’CbCr referred to as YUV or possibly YIQ. This is incorrect: video software developers (or let’s blame their marketing departments) have persistently referred to Y’CbCr as YUV, but YUV actually refers to the way the signal is represented in PAL, and YIQ to the way it is represented in NTSC. Y’CbCr represents luma (Y) and chrominance (Cb and Cr). The Y channel contains all of the green information as well as parts of the red and blue information, while the Cb and Cr channels contain the remaining red and blue information. Broadcast television uses the Y’CbCr color space because it is easier to compress with little noticeable difference (more on that in the next section) and because the luma channel, Y, offers compatibility with black-and-white televisions.
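The luma/chroma split can be sketched in a few lines. This is an illustration using the ITU-R BT.601 luma weights in a simplified full-range form (real broadcast signals add offsets and headroom the text alludes to later); the function name is my own:

```python
# RGB -> Y'CbCr using BT.601 luma weights. Note how heavily green is
# weighted in Y, matching the text's description of the Y channel.
def rgb_to_ycbcr(r: float, g: float, b: float):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma
    cb = (b - y) * 0.564 + 128             # blue-difference chroma
    cr = (r - y) * 0.713 + 128             # red-difference chroma
    return y, cb, cr

rgb_to_ycbcr(255, 255, 255)  # white -> (255.0, 128.0, 128.0): neutral chroma
```

For any neutral gray, Cb and Cr sit at their midpoint (128), which is exactly why a black-and-white television can display the Y channel alone.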
Color Sampling The human eye is better at discerning shades of gray than it is at discerning different colors. Video standards exploit this weakness by preserving the luminance channel while taking fewer samples of the color information. Most commonly, color sampling refers to the stored ratio of luminance to chrominance samples across a small array of pixels.
Figure 11: Color sampling schemes (4:4:4, 4:2:2, 4:1:1, 4:2:0) for the Y, Cb, and Cr channels
• 4:4:4:(4) samples every pixel for color and luminance in the 4 × 4 array of pixels. It is used when quality is of the utmost concern and storage space is not an issue. Given a proper conversion, 4:4:4 Y’CbCr is nearly identical to the original RGB source in picture and size, and so it is the highest-quality sampling rate. It is only nearly identical because rounding errors can occur when converting between the two color spaces. As a result, 4:4:4 is limited to high-end applications in production and postproduction and is not used in broadcast or other means of distribution. The fourth 4 represents the alpha channel, or key, when it is present.
• 4:2:2 samples every pixel in the first and third columns for luminance and color but samples only luminance in the second and fourth columns. Think of it as all of the flavor with half of the calories. 4:2:2 is used in DVCPRO50 and DVCPRO HD gear such as the Panasonic AJ-SDX900 and VariCam.
• 4:1:1 samples every pixel for luminance but samples only the first column for color. It’s all the flavor with a quarter of the calories. 4:1:1 is used in NTSC Mini DV.
• 4:2:0 samples every pixel for luminance but alternates between sampling Cr and Cb color information. Like 4:1:1, it’s all the flavor with a quarter of the calories, but some bites have pepper and some have salt, and you chew to experience the full flavor. 4:2:0 is used for broadcast, PAL DV, and DVD, and it is also part of the prosumer high-definition video (HDV) standard used in the JVC GR-HD1 and Sony HDR-FX1 cameras.

When working with video and computer-generated imagery, or even when working between different video software packages, a noticeable color shift can result from converting between the RGB and Y’CbCr color spaces.

Quantization The difference between a frame of digital video and a frame of film is that the digital video frame is described in discrete color values, whereas the color values for a frame of film are continuously variable and infinite. For example, a pixel in an 8-bit video frame has a tonal value between 0 and 255 for each of its three Y’CbCr channels. Quantizing each frame is the next step in analog-to-digital conversion.
It involves assigning a precise value to each image pixel based upon the image’s bit depth. In most cases, this is eight bits per channel, or 24 bits for all three. At this level of quantization, a pixel can be one of 16.7 million colors. The actual number can be lower when shooting NTSC, since its eight-bit gamut ranges from 16 to 235.

Data Rate The data rate for a digital video format is calculated by multiplying the number of horizontal pixels sampled for Y’CbCr by the number of vertical pixels, then multiplying this product by the quantizing level (bit depth) and the frame rate. This calculation gives the raw, or effective, data rate. Applying a compression algorithm can lower the data rate further.

Compression Compression decreases a video segment’s storage and bandwidth requirements by removing or reducing redundant or less important information. Compression is not always a given with digital video. Depending upon the stage in production, postproduction, or distribution, different compression schemes, or compression-decompression algorithms (codecs), come into play. Codecs
are written to solve particular needs. For instance, the codec used for remote teleconferencing would not suffice for displaying a feature film in a theater, and vice versa. In the first case, the result would be like blowing up a comic strip to four feet across. In the second case, the teleconference would grind to a halt as a video signal 20-30 times the maximum recommended size rendered the conversation useless.

Lossy and Lossless Codecs DV and MPEG-2 are based upon a discrete cosine transform (DCT) compression algorithm. DCT translates portions of a digital image into frequencies which can be expressed numerically and compressed. DCT is a lossy compression method because image information is lost in the process of making the file size smaller. There are also near-lossless and lossless codecs, often based upon wavelet compression, that preserve most if not all of the original image information. They are great for saving file size, but they often do not offer real-time decompression, which is important when you are editing material. As a general rule of thumb, you edit with a lossy codec and master with a lossless codec.

Intraframe and Interframe Compression Intraframe compression looks for patterns within a single frame, and interframe compression looks for patterns across frames. Intraframe compression tends to be of higher quality than interframe compression. The DCT mentioned earlier is an example of intraframe compression; the DV codec uses only intraframe compression, while MPEG-2 can use both intraframe and interframe compression.

Compression Ratios Compression ratios represent how efficient a codec is by relating the original size to the compressed size. A compression ratio of 2:1 is considered lossless, and higher compression ratios most likely involve sacrificing some image quality for size. Hardware-based codecs tend to be high quality or lossless but require hardware such as a board to work. Software-based codecs vary according to their purpose.
Streaming and real-time playback codecs tend to have high compression ratios, and there are now almost completely lossless software codecs available for production archiving and exchange. Many DV codecs are not hardware-based and have a compression ratio of 5:1.

Compression and Production When producing films, you will want to start with the best codec possible; this will be limited by your camera. When going into post, remain at this codec, but consider keeping material in a higher-quality codec if you are going to combine your source material with text, graphics, or effects. When distributing a project, the distribution medium will dictate what codec to use. In each of the remaining chapters, we will return to compression and video codecs as they become relevant.
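The data-rate formula and the 5:1 DV compression ratio mentioned above can be checked with a quick calculation. This is an illustration; the averaged samples-per-pixel factors for each sampling scheme are the usual textbook equivalents, not figures taken from this book:

```python
# Raw data rate = width x height x (samples per pixel) x bit depth x frame rate.
# 4:4:4 stores 3 samples per pixel on average, 4:2:2 stores 2,
# and 4:1:1 / 4:2:0 store 1.5 (at 8 bits per sample).
SAMPLES_PER_PIXEL = {"4:4:4": 3.0, "4:2:2": 2.0, "4:1:1": 1.5, "4:2:0": 1.5}

def raw_rate_mbps(width: int, height: int, fps: float,
                  sampling: str, bits: int = 8) -> float:
    bits_per_frame = width * height * SAMPLES_PER_PIXEL[sampling] * bits
    return bits_per_frame * fps / 1_000_000

raw = raw_rate_mbps(720, 480, 29.97, "4:1:1")  # NTSC DV raw: ~124 Mbps
raw / 5                                        # after ~5:1 compression: ~25 Mbps
```

The result lines up with DV's well-known 25 Mbps video stream, which is a useful sanity check on the formula.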
Transport Mechanism The transport mechanism, the network of motors that moves tape through the camera, is not as crucial to the workings of a video camera as it is to a film camera. Don’t get me wrong: if the mechanism fails, you won’t be able to record to tape. But since the image is not exposed on the videotape itself, and tape-based media’s days are numbered, there is little to talk about. One thing I will mention here, however, is that the transport mechanism in a video camera is more delicate than the transport mechanism in a DV videotape recorder (VTR). So you should avoid using the camera to capture to your computer if you can, and if you cannot afford to purchase a separate VTR for your computer, look for an inexpensive DV tape rewinder. It’s far better for this $20 device to take the wear and tear than your camera’s transport mechanism.
Input and Output (I/O) Most digital video cameras are equipped with analog and digital outputs. The analog ports on most consumer and prosumer cameras are composite and S-Video. A composite video output looks like one of the RCA audio ports; it is usually colored yellow, whereas the audio ports are colored red for the right audio channel and white for the left audio channel. Composite video jams all the color information into one channel. An S-Video port is circular, has several pins, and is of higher quality than composite because it keeps the luminance (how light or dark) and chrominance (what color) in separate channels. Both of these video signals, however, are analog, and the output from either of these ports is inferior to the signal that comes from the digital port of the camera. When connecting a camera to a television for a quick preview, use one of these analog video ports along with the analog audio ports.

The digital port on a DV camera carries both video and audio, so there is no need to connect the analog audio ports to your computer. This port is called a FireWire port, and it is also known as an IEEE 1394 or i.LINK port. In addition to transmitting the video and audio information in a higher-quality digital format, it transmits metadata about the video that allows you to capture and edit video from the camera on a computer easily.

Recording Sound with Video Professional location sound recording techniques have not changed very much from film to video; the recording media have probably changed the most. On a professional shoot, you can manually control audio levels. XLR audio input jacks connect to a sound mixer that
can adjust and mix audio from two or more wireless or shotgun microphones. The mixer is most likely outputting the audio signal to the camera in addition to a second sound recording device such as a Nagra, DAT, or solid-state audio recorder.
NTSC and PAL Video National Television Standards Committee (NTSC) is the video broadcast standard used in the United States, Canada, and Japan. Phase Alternating Line (PAL) is the video broadcast standard in Europe and parts of Africa and Asia. NTSC video plays at 29.97fps at a resolution of 720 × 480 pixels; PAL video plays at 25fps at a resolution of 720 × 576 pixels. Although NTSC has a slightly higher frame rate, PAL has slightly greater resolution and is closer to the frame rate of film, 24fps. The two formats also have different pixel aspect ratios, as discussed next. The frame rate of film is based upon an integer, 24, which is to say that the time base for film accurately correlates to real time. The frame rate for NTSC video, 29.97fps, does not accurately correlate to real time because it is not a whole number.
This discrepancy originated when color television was introduced in the United States. Black-and-white television broadcasts ran at a whole frame rate of 30fps. When color was added to the broadcast signal, the frame rate had to be adjusted slightly to maintain compatibility with the black-and-white standard and keep the picture and sound synchronized. As a result, running video at 29.97 frames per second without dropping frames does not correspond to real time: over one hour, 29.97 non-drop-frame video falls 108 frames behind real time. Dropping, or rather not counting, two frame numbers at the start of every minute except every tenth minute (on average, one pair every 66 seconds and 20 frames) keeps NTSC timecode in step with real time.

Before cameras such as the DVX100 came along, many independent filmmakers shot with PAL cameras, since PAL's frame rate of 25fps is closer to film's rate of 24fps. The PAL format also does not have drop-frame timecode, is easier to deinterlace, and has slightly more resolution. Many people would shoot in PAL and use a product such as Magic Bullet or Nattress Standards Conversion to conform PAL video to a 24p timeline.
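The drop-frame bookkeeping can be sketched in code. This is an illustration of the standard counting rule (skip frame numbers ;00 and ;01 each minute, except minutes divisible by ten); the function name is my own, not from the text:

```python
# Convert a frame count into NTSC drop-frame timecode. Two frame NUMBERS
# (not frames of picture) are skipped each minute, except minutes
# 0, 10, 20, ..., dropping 108 numbers per hour -- the 29.97 vs. 30 gap.
def drop_frame_timecode(frame: int) -> str:
    fpm10 = 17982  # frames in ten minutes of 29.97fps video (9*1798 + 1800)
    fpm = 1798     # frames in a minute that drops two numbers (30*60 - 2)
    tens, rem = divmod(frame, fpm10)
    # add back the skipped numbers so the rest is plain base-30 arithmetic
    frame += 18 * tens + (2 * ((rem - 2) // fpm) if rem > 1 else 0)
    f = frame % 30
    s = (frame // 30) % 60
    m = (frame // 1800) % 60
    h = frame // 108000
    return f"{h:02d}:{m:02d}:{s:02d};{f:02d}"

drop_frame_timecode(1800)   # "00:01:00;02" -- ;00 and ;01 were skipped
drop_frame_timecode(17982)  # "00:10:00;00" -- tenth minute keeps all numbers
```

The semicolon before the frame number is the conventional marker distinguishing drop-frame from non-drop-frame timecode.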
Resolution, Pixel Size, and Aspect Ratio Pixels (picture elements) are the tiny blocks of color arranged on a two-dimensional grid that form an image. The aspect ratio of a single video pixel is its width relative to its height. One would imagine that video pixels would be perfectly square, like the pixels on a computer screen. This could not be further from the truth! The NTSC and PAL digital video formats have rectangular (also referred to as non-square) pixels. A 4:3 NTSC pixel is 10 percent narrower than a computer's square pixel, whereas a 4:3 PAL pixel is roughly seven percent wider. These formats have rectangular pixels because of the history of broadcast technology. NTSC video used to be 640 × 480 or 648 × 486. In the 1990s, the NTSC D1 video standard was defined as 720 × 486; by packing in more discrete blocks of resolution, more detail was made available. When DV and DVD were defined, however, 720 × 480 was considered preferable to 720 × 486, mostly because DV and DVD rely upon DCT compression algorithms that work on 8 × 8 pixel blocks, and 480 divides evenly by 8 while 486 does not.
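The square-pixel display size of non-square-pixel video follows directly from the percentages above. A sketch, using the approximate pixel aspect ratios the text quotes (the exact ITU-R BT.601 values differ slightly):

```python
# Display width in square pixels = stored width x pixel aspect ratio.
def display_width(stored_width: int, pixel_aspect: float) -> int:
    return round(stored_width * pixel_aspect)

display_width(720, 0.9)   # NTSC 4:3 -> 648 (648 x 480 is very nearly 4:3)
display_width(720, 1.07)  # PAL 4:3  -> 770 (roughly 4:3 against 576 lines)
```

This is why NTSC footage looks subtly stretched when viewed pixel-for-pixel on a computer monitor without aspect-ratio correction.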
1920 × 1080 HD
1280 × 720 HD
720 × 576 (PAL)
720 × 480 (NTSC)
Figure 12: Comparing the relative size of different video standards
Video Aspect Ratios Nearly all NTSC and PAL video is created with a 1.33 aspect ratio. This aspect ratio is referred to as "standard" because it has been used for decades. Today, most plasma and LCD televisions that support high-definition video (HD) have a 16:9, or 1.78, aspect ratio. This aspect ratio is referred to as widescreen because it is closer to film aspect ratios. With the Mini DV and DVD-Video standards, standard-definition video (SD) can also have a 16:9 aspect ratio. If you shoot at 16:9, you preserve more resolution when doing a video-to-film transfer, because if you shoot in 4:3 you will have to crop the frame vertically. Shooting 16:9 also gives you more of a film look, and in the future, when televisions are mostly 16:9 rather than 4:3, your footage won't be pillar-boxed with black bars on both sides of the frame, which is how 4:3 footage is fit into a 16:9 display.

Video footage at 16:9 is created by a video camera with a native 16:9 CCD or with an anamorphic lens adapter. The anamorphic process compresses the video image horizontally into a 4:3 video file that is stored on tape. During the capture process, you flag the video as anamorphic, and the non-linear editor (NLE) stretches the video back to the 16:9 aspect ratio during playback. If the viewer owns a 16:9 television or watches the DVD on a computer screen, the video is shown in 16:9. If the television is 4:3, the DVD player will letterbox the video, that is, put horizontal black bars across the top and bottom so the full width of the video fits on the television screen.
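The letterboxing just described is a one-line calculation. As an illustration (the function name is my own):

```python
# Height of each black bar when a wider image is fit, at full width,
# into a narrower display.
def letterbox_bar_height(source_aspect: float, disp_w: int, disp_h: int) -> float:
    image_h = disp_w / source_aspect  # height the image occupies at full width
    return (disp_h - image_h) / 2     # leftover space, split top and bottom

letterbox_bar_height(16 / 9, 640, 480)  # 60.0-pixel bars for 16:9 on 4:3
letterbox_bar_height(2.40, 640, 480)    # ~106.7-pixel bars for a 2.40 film
```

Pillar-boxing a 4:3 image on a 16:9 display is the same arithmetic rotated 90 degrees, with the bars on the sides.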
Interlacing The largest drawback of NTSC and PAL video from a production and aesthetic standpoint is that their playback is interlaced. Video has been interlaced since the beginning of television. Three
factors determined the frame rate for NTSC video: bandwidth constraints, AC current, and the introduction of color television.
Interlaced
Progressive
Figure 13: Interlaced vs progressive footage
As was mentioned earlier, flicker disappears when a moving image is refreshed at least 48 times per second. The initial goal for television was a frame rate of 60fps, which easily produces an image without flicker. 60fps was also chosen because it matches the frequency of AC electrical current in the United States, which is 60 hertz, or cycles per second. Since cathode ray tube (CRT) televisions rely upon electricity to a great degree, having the timing of the display match the electrical current simplifies a lot of things. Unfortunately, broadcasting 60 full frames per second consumed too much bandwidth, so the engineers went back to the drawing board and came up with the method known as interlacing.

Interlacing is the process of splitting each frame into two separate fields, each containing half of the vertical resolution of the original frame. It is as if the image is sliced horizontally into many layers; the even slices create one field and the odd slices create the other. Since each field has half of the original resolution, it occupies half the bandwidth, and since the image is refreshed 60 times a second, flicker is not as noticeable. NTSC video is broadcast at 29.97fps, or 59.94 fields per second. PAL video is broadcast at 25fps, or 50 fields per second; the Europeans went for consistency with their film production methods, which call for film to be shot at 25fps, and with the AC current in European countries, which is 50 hertz.

While interlacing saves transmission bandwidth and produces smoother motion due to its higher refresh rate, it produces a less detailed image than progressive video. The two fields of a frame are interleaved but recorded 1/60 of a second apart from one another, so a frame of motion contains two slightly different moments in time. This is most noticeable when freezing on a frame of interlaced video of a quick motion, like a bouncing ball. Before cameras such as the DVX100 and the XL2, most SD video cameras produced interlaced footage.
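The slicing described above can be sketched directly. This illustration treats a frame as a list of scanlines (the function name is my own):

```python
# Split a frame's scanlines into an upper (even-line) and lower (odd-line)
# field, each with half the vertical resolution.
def split_into_fields(scanlines):
    return scanlines[0::2], scanlines[1::2]  # (upper field, lower field)

upper, lower = split_into_fields(["line0", "line1", "line2", "line3"])
# upper == ["line0", "line2"], lower == ["line1", "line3"]
```

Deinterlacing software performs the reverse operation, reconstructing full frames from fields captured 1/60 of a second apart, which is why fast motion makes the job difficult.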
With these new cameras, progressive footage can be shot, and progressive footage is better for film outs, bluescreen and greenscreen compositing, and compression for DVD and streaming video.

Chapter 1: Film, Video, and 24p | Video
27

Converting Film to Video
The machine that converts motion pictures into interlaced video is a telecine. To get the smoothest playback, the telecine uses a 2:3 cadence to record the first frame on two video fields and the next frame on three fields. This cadence splits four film frames into five interlaced video frames, or 10 fields. As a result, the first four frames of progressive source material are patterned as such: AA BB BC CD DD. Notice how the first and third frames are repeated on two fields, while the second and fourth frames are repeated on three—hence the name 2:3 pulldown.

Original progressive 24fps material is captured in camera.
After processing, the film is run through a telecine where 2:3 pulldown is applied, creating 2 fields from the first frame and 3 fields from the second.
Telecined video is now interlaced and runs at 29.97 fps. Upper fields are dark. Lower fields are light.
Jitter frames
Figure 14: Telecining video
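The 2:3 cadence can be sketched in Python. This is a hypothetical illustration of the field mapping only; letters stand in for whole frames and fields.

```python
# Map four progressive film frames onto ten interlaced video fields using
# 2:3 pulldown: 2 fields from frame A, 3 from B, 2 from C, 3 from D.
# Pairing consecutive fields yields the five video frames AA BB BC CD DD.

def pulldown_2_3(frames):
    a, b, c, d = frames
    fields = [a, a, b, b, b, c, c, d, d, d]          # 2:3:2:3 field counts
    # Pair consecutive fields into five interlaced video frames.
    return [(fields[i], fields[i + 1]) for i in range(0, 10, 2)]

video = pulldown_2_3(["A", "B", "C", "D"])
assert video == [("A", "A"), ("B", "B"), ("B", "C"), ("C", "D"), ("D", "D")]
```

The BC and CD pairs are the "jitter" frames: each mixes fields from two different source frames, which is why a freeze-frame on telecined material can look doubled.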
24p
The 24p format satisfies the needs of both those going to film and those requiring progressive footage for other reasons. The format was created because digital video has several economic and efficiency advantages over film and because NTSC is a poor originating format. Going from NTSC to film, or from NTSC to PAL, creates noticeably jarring artifacts within and between frames.
History of the 24p Format
But before we work with 24p, let’s discuss how 24p began, how it works, and its advantages. The 24p format began with HD video. Experiments with HD began in the Sixties, but it was not until the mid-Eighties that the Japanese had something remotely usable for production. The real milestone came in 1997, when Sony launched HDCAM, an all-digital HD format. With this format, television producers finally had resolution that could attempt to rival film. While the format quickly gained popularity, it was modified slightly: initially it was 1920 × 1035, but it was changed to 1920 × 1080 when the Society of Motion Picture and Television Engineers (SMPTE) introduced HD broadcasting standards.

HD Modified to Suit Filmmakers
Despite the initial acclaim of HD, it was not going to be the consumer storm that VHS was or even DVD has been. Quite simply, the cost to both broadcasters and consumers is high, and it will be some time before we see HD televisions in the majority of households. HDV may help accelerate consumer adoption of HD. Sony wanted to make money from its initial investment, so it was easily persuaded by the digerati in Hollywood (most notably, George Lucas) to engineer an HD camera capable of recording 24 progressive frames per second. Their reasoning was that they wanted to improve the
production pipeline. Since their films were increasingly edited and enhanced digitally, they were essentially asking to remove the laborious steps of developing and digitizing film negatives. The result was the HDW-F900, or CineAlta, camera. Panavision created the prime lenses and the camera package that were used to shoot Star Wars Episode II. Once the kinks were worked out, the production was able to shoot more setups in a day and remain on schedule, shooting 30 or more setups a day for 60 days. For a rough comparison, a traditional film production captures about one minute of the finished film in a day. A lighter camera that doesn’t require film loading can cut motion picture production time in half and allows for more setups and takes. HD production also has the advantage of letting the crew see the footage’s native resolution, as it is being shot, on an HD monitor. Even with expensive film cameras, the video tap does not show enough detail to spot mistakes such as improper focus or small problematic details.

Additional HD Solutions Followed
Additional HD cameras have followed from Dalsa, Kinetta, Panasonic, and Thomson (maker of the Viper). Most notably, Panasonic introduced the Varicam. It not only shoots 24fps progressively, but it can also be undercranked to 4fps or overcranked to 60fps. Apple and Panasonic have made editing footage from the Varicam relatively painless for Final Cut Pro users by adding support inside Final Cut Pro for capturing Varicam footage using the AJ-HD1200 DVCPRO HD deck over FireWire. Panasonic has also released the AG-HVX200, an all-in-one SD and HD camera that can record 480i, 480p, 720p, 1080i, and 1080p. It writes to Mini DV tape as well as P2 solid-state media. DV-format 480i and 480p can be recorded to tape and P2, but DVCPRO50 480i and 480p, as well as DVCPRO HD 720p, 1080i, and 1080p, can be recorded only to P2 media or to one of the hard disk recorders from Focus Enhancements or nNovia.
Figure 15: Panasonic’s AG-HVX200 offers filmmakers DVCPro 25, 50, and 100 all in one camera.
The Digital Intermediate: Having It Both Ways
For those who must still originate on film (currently the vast majority of major motion picture productions), another solution is to produce a digital intermediate (DI) from the film negative. The negative is scanned at high resolution, sometimes downconverted to standard resolution for editing, and then reconnected to the high-resolution source for effects and color correction before being recorded back to film with a film recorder.
24p Brought to the Masses
Panasonic created a revolution when it introduced the DVX100 and the AJ-SDX900 cameras. One should remember that these SD cameras are not like 24p HD cameras. While the HD cameras can truly record 24p material straight to tape and require no special conversion by NLEs to be seen as 24p, these standard-definition cameras do require such conversion. We will cover what these cameras do to cajole a progressive image out of interlaced frames, but in the meantime, here is a look at the cameras.
Panasonic DVX-100B
Canon XL2
Figure 16: Canon XL2 and Panasonic DVX-100B cameras
24p Standard Definition Formats
24p cameras shoot in both interlaced and progressive modes. Interlaced footage is shot at 59.94 fields per second, or if you prefer to refer to the footage in frames, 29.97fps. Progressive footage is shot at 29.97fps or 23.976fps. In the 24p modes, video is captured progressively at 24fps (23.976fps, to be exact). Pulldown is then applied in camera to convert the frame rate from 24fps to 29.97fps before the video is recorded to tape. The cameras offer two methods for applying pulldown: 24p standard and 24p advanced. The standard pulldown is the same 2:3 (or 3:2) pulldown method used when transferring film to NTSC video. Both modes shoot the video as 24p progressive video, but both run it through an internal telecine in the camera that records the video to tape as 29.97fps material.

24p Standard
The 24p standard mode applies the same 3:2 pulldown cadence used when film is processed by a telecine for television broadcast. This mode is fine for video intended to be broadcast, but if this is not your goal, you are better served by shooting in the advanced mode. Despite offering the smoothest conversion between 29.97 and 23.976fps, the integrity of the original progressive frames is sacrificed for compatibility with 29.97 material.
Original progressive 24fps material is captured in camera.
Resulting interlaced 29.97fps footage is recorded to tape.
2:3 pulldown is applied, creating 2 fields from the first frame and 3 fields from the second.
Original progressive 24fps material is recovered in NLE.
Jitter frames are decompressed and fields from each are used to recreate the third frame.
Jitter frames. Upper fields are dark; lower fields are light.
Figure 17: 24p standard cadence
Looking at the cadence diagram makes this more evident. The standard mode compromises the integrity of every third frame in the original progressive source because that original progressive frame has to be recreated by recombining fields from two interlaced frames. This is not as clean as the advanced mode because both interlaced frames have to be decompressed and then recompressed to recreate the third frame.

24p Advanced
The 24p advanced mode employs a pulldown method of 2:3:3:2. As with the standard mode’s 3:2 pulldown, the cadence begins by recording the first frame onto two fields and the second frame onto three fields. But instead of recording the third frame onto two fields and the fourth onto three, the advanced mode records the third frame onto three fields and the fourth frame onto two. When the original frames are mapped to fields, the pattern is AA BB BC CC DD. Original progressive 24fps material is captured in camera.
Resulting interlaced 29.97fps footage is recorded to tape.
2:3:3:2 pulldown is applied, creating 2 fields from the first and fourth frames and 3 fields from the second and third.
Upper fields are dark. Lower fields are light.
The jitter frame is discarded when the 2:3:3:2 pulldown is removed by the NLE.
Jitter frame
Figure 18: 24p advanced cadence
Original progressive 24fps material is recovered in NLE.
Now the excitement around the advanced mode is that this cadence faithfully encodes the full progressive frames into an interlaced signal. The advanced mode’s cadence, 2:3:3:2, is subtly different from the standard pulldown of 2:3:2:3. This difference in rhythm allows all the original frames to be recovered intact from individual interlaced 60i frames. An NLE that understands the mode’s pulldown pattern throws away the mixed “BC” jitter frame (shown in Figure 18) and uses the remaining frames to restore the original progressive footage.

A Small Caveat
The advanced mode allows an NLE to reconstitute the original progressive frames, but it is not interchangeable with material shot at 60i. Since this cadence produces a frame that mixes fields from two different source frames, it has an ever so slightly noticeable jitter when played back at 60i. So the common recommendation is that this mode should be used only when you are adhering to a progressive workflow. If you shoot in advanced mode and edit at 29.97fps, the viewer may notice these subtle motion artifacts. The advanced mode is intended for workflows that remain at 23.976fps for the purpose of being blown up to film or compressed at 23.976fps for progressive playback on a DVD, on a local desktop, or over a network via streaming.
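The 2:3:3:2 cadence and its lossless removal can be sketched in Python. This is a hypothetical illustration; letters stand in for whole frames and fields, and no real NLE works on data this simple.

```python
# 24p advanced pulldown (2:3:3:2): 2 fields from A, 3 from B, 3 from C,
# 2 from D, giving the video frames AA BB BC CC DD. Removal is lossless
# because every original frame survives whole in a single video frame;
# the mixed "BC" jitter frame is simply discarded.

def pulldown_2_3_3_2(frames):
    a, b, c, d = frames
    fields = [a, a, b, b, b, c, c, c, d, d]          # 2:3:3:2 field counts
    return [(fields[i], fields[i + 1]) for i in range(0, 10, 2)]

def remove_pulldown_2_3_3_2(video_frames):
    # Video frames 0, 1, 3, 4 each hold both fields of one progressive
    # frame; frame 2 (the BC jitter frame) is thrown away.
    keep = [video_frames[i] for i in (0, 1, 3, 4)]
    return [pair[0] for pair in keep]

video = pulldown_2_3_3_2(["A", "B", "C", "D"])
assert video == [("A", "A"), ("B", "B"), ("B", "C"), ("C", "C"), ("D", "D")]
assert remove_pulldown_2_3_3_2(video) == ["A", "B", "C", "D"]
```

Contrast this with the standard 2:3 cadence, where frame C exists only spread across the BC and CD frames and must be reassembled from fields rather than recovered intact.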
Digital Video Formats
When shooting a project that you intend to edit and distribute at 24fps, you can shoot with a camera that supports a 24p workflow, or you can shoot NTSC at 60i or PAL at 50i and use software in post to conform the footage to 24fps. The following section is a tour through the various flavors of digital video, from SD to HD and beyond.
Standard Definition (SD)
SD video is what we all have grown up watching (or watched develop, if you happen to be a few generations back). In most cases this is 720 × 480 (or 486 if you work in broadcast) in NTSC and 720 × 576 in PAL.

DV
The DV format is largely responsible for the proliferation of low-cost digital video production cameras, decks, software, and workstations. Several consumer electronics manufacturers developed the digital video cassette (DVC, now DV for short) format in the early Nineties. DV is also referred to as MiniDV, a name that comes from the small cassette size. DV is a great acquisition medium for the budget-challenged filmmaker or for the filmmaker who needs smaller gear. The tape costs are low, the cameras get better every year, and the digital files can be acquired and edited on relatively low-cost hardware compared to a system of five years ago. The format is SD (720 × 480 NTSC, 720 × 576 PAL), offers moderately good color sampling (4:1:1 for NTSC, 4:2:0 for PAL), and supports standard (4:3) and widescreen (16:9) aspect ratios. The video portion of the DV codec is lossy and is similar to motion-JPEG. The codec offers a 5:1 compression ratio, and the required bit rate for capture is 3.6 megabytes a second, so storage and transfer speeds are not an issue on a modern system.
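As a sanity check on that 3.6-megabyte-per-second figure, here is a quick back-of-the-envelope calculation in Python (assuming binary gigabytes; actual captured file sizes vary slightly with audio settings and container overhead):

```python
# Rough storage math for the DV format: the 25Mbps video stream plus
# audio and subcode comes to about 3.6 megabytes per second over FireWire.
DV_MB_PER_SECOND = 3.6

def dv_storage_gb(minutes):
    """Approximate disk space, in gigabytes, for a length of DV footage."""
    return DV_MB_PER_SECOND * 60 * minutes / 1024

print(round(dv_storage_gb(60), 1))   # one hour of DV: roughly 12.7GB
```

At about 13GB per hour, even a modest drive of the era holds a full shooting day, which is why the text says storage is not an issue on a modern system.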
DVCAM and DVCPRO25
DVCAM is Sony’s professional version of DV, and DVCPRO25 is Panasonic’s. Both formats use the same color sampling, quantization, and codec as DV. The main differences are tape speed (DVCAM is faster than DV, and DVCPRO25 is faster than DVCAM), the range of tape sizes, and support for the SMPTE broadcast timecode format. DVCAM and DVCPRO25 cameras also tend to have better lenses and controls over picture and sound than their DV siblings. A DVCAM or DVCPRO25 deck can usually read and write a DV tape and read the competing format’s tape but not write to it.

Digital Betacam
Digital Betacam is not a digital version of analog Beta but a new digital format using the tape form factor and brand cachet of Sony’s popular analog professional format, Betacam SP. It offers a very high-quality picture, and footage can be shot at eight or 10 bits per pixel with 4:2:2 color sampling. It also uses DCT compression, but at a 2:1 compression ratio, yielding near-lossless quality. Given its high sampling and quantizing rates and low compression ratio, D-Beta requires expensive decks, hard drives, and video capture cards to be used effectively. As a result, it’s popular among stations and post houses, but it is not something that the independent producer would use.

DVCPRO50
As the extension to its name implies, DVCPRO50 doubles DVCPRO25’s bit rate in megabits per second (Mbps). It also uses 4:2:2 color sampling. Panasonic’s AJ-SDX900 is a remarkable DVCPRO50 camera that shoots 16:9 or 4:3, 60i and 24p, and DVCPRO25. The 24p footage shot with the camera looks amazing and holds its own when upconverted to HD or output to film.
High Definition (HD)
HD is not as cut and dried as DV because there are several standards. To illustrate this point, a friend of mine, a filmmaker whose short was shown at a prestigious film festival, recounted a conversation he had with the festival’s projectionist. My friend had shot his piece with the Panasonic Varicam and did a film out for the festival. When he saw a digital projector capable of showing HD in the projection booth, he wondered why he went to the trouble (and cost) of doing a film out. He said to the projectionist, “Why couldn’t I have brought my short on HD tape?” The projectionist replied, “Because there’s too many standards.” My friend cut in, “Well, can’t you show 1080p on that thing?” The projectionist shrugged and said, “HD is always changing. It’s a moving target.”
Figure 19: Panasonic’s Varicam was one of the first HD cameras to offer variable frame rates
HD Formats Used in Production
The two most prevalent HD production formats are Sony’s HDCAM and Panasonic’s DVCPRO100/HD. Two mastering formats are Panasonic’s D5 and Sony’s HDCAM SR. HDV, the prosumer format supported by Sony and JVC, is discussed in the next section. There are additional cameras, such as the Thomson Viper FilmStream, the Dalsa Origin, and the Kinetta, that capture a frame large enough to be called HD, but they record to disk or an external computer, not to a tape format such as DVCAM or DVCPRO100.

HDCAM
HDCAM, Sony’s professional HD format, was initially an interlaced format, but it added progressive frame rates when filmmakers fell in love with its picture quality. Technically speaking, HDCAM supports eight bits per pixel and samples color using a 3:1:1 ratio, which means the color detail retained horizontally is one-third of the brightness detail retained. The bit rate for HDCAM is 135Mbps, and its compression ratio is 7:1. The codec is similar to DV but is proprietary to Sony.
DVCPRO100/HD
Panasonic’s HD format goes by a few names. It was originally called DVCPRO100, but attaching HD to anything was in vogue, so Panasonic marketing changed the name to DVCPRO HD. The 100 stood for its data rate, 100Mbps. The Varicam’s display resolution is 1280 × 720 pixels, but the actual native resolution is 960 × 720 pixels: it samples fewer horizontal pixels for luminance than full-raster 720p. Its color sampling rate is 4:2:2, and its compression algorithm is a higher-quality variant of the DV codec.
High Definition Video (HDV)
The history books are still being written on HDV and its contribution to digital production. By the time this book is published, we will probably have three to five HDV cameras on the market, and many people will be producing feature-length documentary and narrative films with them. The main takeaway for the HDV format is that you get one of two widescreen HD resolutions on DV cassettes, with FireWire connectivity to your NLE of choice. While this sounds like a great concept, there are caveats worth mentioning.
Sony HDR-FX1
JVC GY-HD100
Canon XL H1
Figure 20: Sony’s HDR-FX1, JVC’s GY-HD100, and Canon’s XL H1 are the leading HDV cameras today
• HDV uses long group of pictures (GOP) MPEG-2 compression. While MPEG-2 video from a professionally compressed DVD looks fantastic on your television, it is not ideally suited to being a production format because of its unforgiving nature when you try to make edits on anything but an I-frame. Transcoding to a less compressed intermediate format introduces additional compression artifacts, adds transcoding time before you can begin editing, and requires additional storage and processing power to work with HD-sized video.
• It samples color at 4:2:0. That means luminance is sampled at every pixel on every line, but color is sampled at only every other pixel, and only on every other line; the lines in between share the color information from the lines above them. This means you had better light your subject as well as you can, because there’s not a lot of wiggle room for you in post. Pulling a key given this color sampling (not to mention the compression and interlacing) is even more challenging.
• The format supports two resolutions: 1280 × 720 at 30p, and 1440 × 1080 at 60i. Yes, there is currently no native support for 24p at 1080. JVC’s camera, the GY-HD100, has a new format, ProHD, that uses HDV compression but records 23.976fps to tape. Both the Canon XL H1 and the Sony cameras shoot 1080 interlaced and offer only simulated 24-frame modes. Note that the 1080i HDV format is not like the broadcast HD 1080i format: HDV is 1440 × 1080, while the broadcast HD format is 1920 × 1080. I welcome the resolution and price point of HDV, but given its compression methods I think of it as a temporary format until Sony, Panasonic, or someone else offers us real HD resolution, frame rates, and acceptable compression at a similar price.

The Benefits of Shooting HDV
Some might say that shooting HDV is a waste of time. Regardless of what you are told, shooting HDV is a great way to produce high-quality SD content. Let’s face it, SD television sets and DVD players are going to be around for a while. If you are not seeking a film out or mastering to HD, shooting in HDV and downsampling to SD looks a lot better than shooting in SD. Also, since HDV is only 16:9, you automatically produce anamorphic or widescreen content. If you want to keep a 4:3 aspect ratio, that’s as simple as cropping to 4:3 before you scale down. By starting with HDV and finishing in SD, you have more creative options. With the extra resolution, you can safely crop a shot or do subtle pans and tracking shots. You simply don’t have the resolution in SD to do this without looking like you scaled up the SD footage.
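The 4:2:0 scheme described in the bullets above can be sketched in Python. This is a hypothetical illustration on a tiny image; real encoders use various filters to derive the shared chroma sample, and averaging is just one possible choice.

```python
# 4:2:0 chroma subsampling: luminance keeps one sample per pixel, but
# chrominance keeps one shared sample per 2x2 block of pixels, so every
# other line reuses the chroma of the line above it.

def subsample_420(chroma):
    """chroma: a 2D list (rows x cols) of per-pixel chroma values."""
    out = []
    for r in range(0, len(chroma), 2):
        row = []
        for c in range(0, len(chroma[r]), 2):
            block = [chroma[r][c], chroma[r][c + 1],
                     chroma[r + 1][c], chroma[r + 1][c + 1]]
            row.append(sum(block) / 4.0)   # one shared sample per 2x2 block
        out.append(row)
    return out

chroma = [[8, 8, 4, 4],
          [8, 8, 4, 4],
          [2, 2, 6, 6],
          [2, 2, 6, 6]]
assert subsample_420(chroma) == [[8.0, 4.0], [2.0, 6.0]]
# 16 chroma samples reduced to 4: a quarter of the color information,
# which is why keying 4:2:0 footage is so unforgiving.
```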
Current Cameras Supporting HDV
While the next few years will see a number of new camcorders supporting HDV for the consumer and the professional, here is a brief list of what is available now.
• JVC JY-HD10U. The first HDV camera sold. It’s a single-chip (one CCD) camcorder that records 1280 × 720 at 30p. While it piqued people’s interest, it didn’t catch fire the way the DVX100 did.
• JVC GY-HD100. A new camera that records true 24p onto DV tape using 1280 × 720 HDV video. It also sports interchangeable lenses.
• Sony HDR-FX1 and HVR-Z1. These are the HDV cameras selling like wildfire. What’s driving this is that they shoot 1440 × 1080 at 60i using three one-third-inch CCDs. The HDR-FX1 is the consumer version, and the HVR-Z1 is the prosumer version that shoots NTSC and PAL HDV. Many compare these cameras to Sony’s original DV camera, the VX1000, in terms of driving adoption of a new standard.
• Canon XL H1. This camera is a lot like the Sony HDR-FX1 in that it shoots 1080i HDV and offers a faux 24-frame mode that is not true 24p. The killer feature of this camera, though, is that it offers an HD-SDI port for capturing straight to a card capable of capturing an HD-SDI signal.
• JVC GY-HD7000. This camera is aimed more at the news station videographer. It has three CMOS sensors two-thirds of an inch wide and shoots 1080i to disk and 720p (24p and 30p) to tape and disk.
The End of Tape Acquisition
All the previous formats record to magnetic tape. While tape is cheap, it is prone to dropouts caused by heat, moisture, or dust. It does not allow for capture speeds beyond real time, and it is not reliably reusable. In addition, it adds another complex mechanical system to cameras and VTRs. Companies like Panasonic and Sony are working on new mechanisms that record video to solid-state memory or to an optical disc. Companies outside the mainstream, such as Kinetta, Thomson, Arri, and Dalsa, are offering cameras that record to a computer or hard drive.
P2, Panasonic’s Solid State Format
Panasonic’s foray into tapeless acquisition is P2, which stands for Professional Plug-in. It’s a Type II PC Card with four SD memory cards ganged together inside. As funny as it sounds, it is as if they created a RAID from the memory cards you would find in a digital still camera. What’s enticing about this format is the number of portable computers with PC Card slots and the absence of a tape mechanism on the P2 camera models. Initial cards come in 2GB and 4GB capacities, but larger cards will be developed as Panasonic figures out how to squeeze more memory onto SD cards.
Figure 21: P2 recording media (image courtesy of Panasonic)
XDCAM, Sony’s Optical Disc Format
Sony’s next-generation acquisition format, XDCAM or “Professional Disc,” is built around Blu-ray discs that use blue lasers to read and write higher-density optical discs. Initial storage capacity is somewhere around 23GB, and transfer rates start at 72Mbps for cameras equipped with one laser and 144Mbps for cameras with two. Such a high transfer rate is more than adequate for DVCAM and even Sony’s higher-quality SD format, MPEG-IMX, which runs at 50Mbps. It’s hard to say right now which is the better format. People on the Panasonic side cite the small form factor, speed, and simplicity of a solid-state recording mechanism. Proponents of XDCAM argue that optical disc is a better archival format, is more economical, and will become more standard as Blu-ray discs become commonplace in computers and consumer video players.
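Those disc capacities and bit rates make for easy back-of-the-envelope math. The sketch below assumes decimal gigabytes and ignores audio and filesystem overhead, so treat the results as rough estimates.

```python
# How many minutes of footage fit on a roughly 23GB XDCAM disc at a
# given recording bit rate?

def minutes_on_disc(disc_gb, mbps):
    bits = disc_gb * 1e9 * 8          # decimal GB -> bits
    return bits / (mbps * 1e6) / 60   # bits / (bits per second) -> minutes

print(round(minutes_on_disc(23, 50)))   # MPEG-IMX at 50Mbps: ~61 minutes
print(round(minutes_on_disc(23, 25)))   # DVCAM at 25Mbps: ~123 minutes
```

Roughly an hour of MPEG-IMX or two hours of DVCAM per disc, which is why the 72Mbps and 144Mbps transfer rates are described as more than adequate for faster-than-real-time offload.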
Figure 22: XDCAM (image courtesy of Sony)
Direct-to-Disk and Memory Recording
Companies outside the mainstream business of selling tape or other forms of recordable media are blazing trails in recording to literally whatever they can: hard drives, straight into an NLE, or boutique devices known as digital field recorders (DFRs), which are built around either a bank of solid-state memory chips or a redundant array of inexpensive disks (RAID).
Recording Alternatives for SD Camcorders
Regardless of the format you shoot, ingestion, the time spent logging and capturing clips, is a laborious and time-consuming task that can seem daunting. A DVR makes video production more efficient by capturing video to disk as you shoot it. These products get you in front of your footage more quickly by creating clips that are compatible with your NLE. The video is still recorded to tape as a backup.
Hardware DVRs
Hardware DVRs consist of a hard disk, a FireWire controller, and a simple operating system that reads a DV stream over FireWire and writes the DV file to the unit’s hard drive in the digital video file format of your choice. Some units, such as the Focus Enhancements FireStore FS-4, are totally self-contained: they include a hard drive, are battery-powered, and mount to a camera or belt. Other units are meant more for studio setups and require you to hook a hard drive to the unit, as these models provide only a pass-through.
Figure 23: Firestore FS-4 attached to a Canon XL2
Software Digital Video Recorders
DV Rack by Serious Magic is a software-based DVR. While you can theoretically capture straight into the log and capture window of your NLE, the log and capture functionality in most NLEs doesn’t listen to the record start and stop signals created by your camcorder. DV Rack, however, does listen to these signals.
Figure 24: DV Rack
Hacking the Panasonic DVX100A and Canon XL2
Reel-Stream LLC took notice of the progressive CCDs in the 24p cameras offered by Panasonic and Canon and decided to build the Andromeda, a device that hijacks the image recorded by the CCD before it is crushed by the camera’s DSP. What this gives you is a much better image: up to full 4:4:4 color sampling at 10 or 12 bits per pixel, higher resolution (up to 1540 × 990), and no compression. The Andromeda adds a USB 2.0 port to the side of your camera, which means you need to tether the camera to a recent PowerBook (or other recent Macintosh computer) with a fast hard disk. This setup is not ideal for the documentary shooter who prefers run-and-gun shooting, but if you shoot primarily in a studio or have the crew to support the workflow, it is incredibly cost-effective given the pristine quality the Andromeda offers.
Figure 26: Andromeda USB2 port added to a DVX-100 (image courtesy of Reel-Stream)
To learn more about the Andromeda, see sample footage, and participate in their forums, visit: www.reel-stream.com.
24p Preproduction
Before producing a film, time is best spent planning, or, as filmmaker Jean-Paul Bonjour puts it, “Think before you shoot!”
Chapter 2: 24p Preproduction | 24p Pre-Production
43
Preproduction
A film is developed in four stages: preproduction, production, postproduction, and distribution. You plan in preproduction and shoot in production. You edit footage, color-correct, add titles, and create effects in postproduction. You get your film out to the world in distribution. This chapter covers the business, logistical, and creative planning required for a film to run smoothly.

Figure 1: The stages in producing a film
1. Budget and schedule.
2. Raise funds. The sooner you start, the sooner you can stop worrying about money.
3. Buy insurance. Protect yourself and the people around you from physical and legal harm.
4. Incorporate. Incorporate if needed and establish expectations for how people will be compensated.
5. Pick your cast and crew. For documentaries, identify the documentaries’ primary subjects.
6. Design your film’s look. Create storyboards, select wardrobe and makeup, find locations, construct sets, and acquire props.
7. Rehearse and refine. Take time to have your actors read the script. Sometimes a line that reads great when written falls flat when performed. Rewrite as necessary.
8. Choose a format. Select the camera for shooting. Develop an informed notion of how the film will appear visually: the color, saturation, and contrast.
9. Break down the script. Develop shooting schedules and perform additional logistical tasks.
10. Procure equipment. Get everything you need to shoot and edit your film. You’ll work with rental facilities and borrow gear from friends.
Preproduction begins just before or soon after the script or treatment is complete. Although the script or treatment may change unexpectedly, the time between completing the script and beginning principal photography is best spent planning. Planning before production reduces the chance of mistakes being made. If Murphy’s law comes into play during the shoot, you’re prepared to roll with it. More specifically, the advantages are: • You created a safety net. You have the releases, agreements, and insurance you need to produce the film without the fear of being sued or spending your peak creative years working the chain gang. • You have the tools and resources to complete your film in a timely manner. During the shoot you can concentrate on giving direction rather than making desperate calls to the rental house. • You set expectations for your cast and crew. Your vision for the actors’ performances and the film’s look is understood by all.
Business Activities
In many ways you need to be both an artist and an entrepreneur to make films. This is not about making money but about undertaking an endeavor that should operate like a business. To that end, I suggest getting a little savvy about managing people and resources, because you want to finish your film without going to jail, losing your life savings, or ruining your reputation.
Budgeting
A budget details the costs of completing your film: equipment and supply purchases, cast and crew salaries, and the fees associated with rentals, insurance, permits, and postproduction. Inexperienced filmmakers leave out key line items like insurance, underestimate costs, and spend the money unwisely. You want the budget to be comprehensive, realistic, and appropriate. If you are a first-time filmmaker, enlist the support of someone with experience in preparing budgets.

What’s Included in a Budget
In a budget, you should include the things you are paying for, the things you are receiving as donations, and the things you are deferring. Prepare it at the beginning and include all stages of production. This will encourage you to think through the entire project, ensure that nothing crucial is forgotten, and set a financial target for you to meet. Preparing a complete budget is even more crucial for those seeking grants, since foundations require a completed budget as part of the application process. “Above the line” and “below the line” are two pieces of film production jargon that refer to a way of segmenting the cost of a big-budget film. Above-the-line costs cover the compensation for the director, producer, and key talent. Below-the-line costs cover all the production and postproduction expenses for crew, equipment, and services. To learn more about budgeting, look at Film & Video Budgets by Deke Simon (Michael Wiese Productions) or the IFP/Los Angeles Independent Filmmaker’s Manual by Eden Wurmfeld and Nicole Shay Laloggia (Focal Press).

Compensation
As an independent filmmaker, compensating cast and crew is a challenge given the other costs associated with producing a film. Compensation can, however, take many forms beyond cash, so remember to mention these when negotiating with cast and crew members.
Working on an independent film is a refreshing change from the bone-dry drudgery of corporate work; it can be an opportunity to showcase one’s own acting or technical talent, to network, or to be fed for some time. On bigger-budget productions, you will have to set up a business and pay salaries, taxes, and disability insurance. Hire an accountant to handle all of this for you. Depending upon the budget and the agreements you have with the cast and crew, some compensation can be deferred until the film makes a profit. In the production of short films, compensation is not as crucial, since everyone understands that short films almost never make money and because the production is done in one or two days. In this scenario, a small stipend, good food, a real opportunity to do something creative, and your heartfelt appreciation and gratitude will make up the difference. If you and your crew are weekend warriors, gratitude and mutual reciprocation will suffice because you all have the security of a day job. When working with struggling full-time actors or freelance crew who don’t have the security of a bimonthly paycheck with benefits, you should feel obligated to compensate them appropriately, because this is their livelihood.
Fundraising
There are no rules when it comes to fundraising, because each fundraising scenario is uniquely bound by the nature of your project and the sources from which you are seeking funds. All the aspects of being a good filmmaker apply to being a good fundraiser: creativity, stamina, patience, and perseverance. If you don't ask for money, you certainly won't get it. Think of it this way: Someone is going to get it, and it might as well be you. Remember, contributing to a documentary is a tax write-off, and backing a film can be an investment, if that is your intention.
Where to Look
Start with people you know. Fundraising is not just about raising cash but also about receiving in-kind donations such as services, goods, and rentals free of charge or at a substantial discount. If you need to travel, people can even donate airline mileage to you. Be creative and think outside the box when asking for help. Research potential funding sources. Meet with board members and wealthy donors of the foundations you are targeting.
Grants and Proposals
Tailor each proposal to the foundation to which you are applying. While this is extra work, it is more successful than the one-size-fits-all approach. Make your proposal passionate, realistic in scope, focused, and polished. The package you submit should be cohesive, and the presentation materials must complement one another. This is a good time to enlist the skills of a friend who is a graphic designer and another friend who is skilled at writing and marketing. A polished proposal has good grammar and spelling and consistent cross-references, and it looks professional. When researching sources, find out what subjects they tend to fund. Which funder will find your subject matter appealing? Are you going after independent institutions, national foundations, or individual contributors? Look at annual reports for companies and foundations. They often indicate how they donate their money and how much they donate. The Foundation Center is a great place to research funding sources. It has offices in major cities and an excellent web site: www.foundationcenter.org. A proposal package should always meet the guidelines established by the foundation. Some foundations are really persnickety and want things done their way or no way at all. But most are looser in their requirements and appreciate a proposal that is well put together. The elements you need in an effective proposal are:
• A personal note. This is a short note to the commission, signed by you. Ideally you know the name of the person receiving the submissions, and you can address it to their attention.
• A "two-pager." The two-pager is an introduction to your film. It also explains to the reader why they should care about your project.
• A budget. Commissions want to know how much your project will cost, what you plan to do with the money, and how much you have raised thus far or will receive through in-kind donations.
• A fundraising plan. How much are you asking them for? Who is your fiscal sponsor? Who else are you going to ask, and what other methods beyond grant proposals are you using to raise completion funds?
• Letters of endorsement. These are letters from people who have read your proposal, seen your footage, and believe in the project. They should be credible and well known in their fields. They are vouching for you and your project.
• Biographies of key personnel. Who is behind the project? What else have they worked on?
• A sample tape. This can be a short trailer for the project. Take this seriously and start the tape with something powerful. Many reviewing committees raise their hands as soon as they get bored with a tape. They'll give most films a chance, but remember that these people have to watch hundreds of sample tapes, so don't put your best material last.
Cost-Saving Tips
Take Advantage of Filmmaking Incentives
In case you haven't already found out, making films is not cheap. The primary ways to stretch your filmmaking dollar are to borrow, exchange services, use incentives, or do it yourself.
Many city and state governments offer incentives and assistance for filmmakers. These incentives include tax breaks, discounts on permits, assistance with security, and help with location scouting. Joining a nonprofit film organization can also get you discounts on rentals and supplies. For example, Film Arts Foundation in San Francisco has a great weekend rate where you can pick gear up on Friday afternoon and keep it until Monday morning. Film Arts has also made arrangements with many film rental houses in the Bay Area to offer discounts on expendables and rentals to members.
Borrow or Do Favors
Networking with other filmmakers opens up the opportunity to borrow or rent equipment from one another. If you have a skill you can offer a fellow filmmaker, exchange your services for help and equipment, or promise to help each other with projects. My experience collaborating on projects has been invaluable. I have helped several friends with the design, animation, and web presence for their films. I have loaned out equipment and been an effects supervisor, art director, and production assistant on others' projects. For other friends, I've been someone they could call for technical advice. All of my involvement has been repaid. Not only have I gained friendships and free help, but I've learned from watching others and expanded my base of filmmaking colleagues.
Do it Yourself A lot of basic filmmaking gear can be built from a few trips to the hardware store. The Internet has many resources for building gear cheaply and locating low-cost alternatives. If you cannot recoup the costs by doing commercial work with the equipment, I strongly advise against purchasing it. While it’s tempting to max out your credit cards and get the dream editing system and monster camera kit, you will probably be better off renting.
Scheduling
Producing a high-level schedule helps you plan and reserve resources so you have them when you need them. You should create a schedule that starts with preproduction and follows with production, post, and distribution. An effective schedule lists all the people, resources, and deliverables for a film. It can help you determine the amount of work required of certain crew members and define compensation based upon the amount of time each person works on the film. The next few sections describe how to break down a script by hand in case you would like to do it this way. For short films, this works easily. For longer films, you may want to consider purchasing a movie scheduling program such as Gorilla or Movie Magic Scheduling, both discussed at the end of this chapter.
Script Breakdown and Analysis
The first thing the director or the assistant director does is break down the script. Producing a film takes an incredible amount of organizational energy. What starts this frenzy is the chore of breaking down the script and listing all the items required to complete the film. This grand list is then factored into a shooting schedule and several other production checklists used by the crew. This work is not related to the script analysis the director should do before rehearsing with the actors, or before shooting if there is no rehearsal.
Segmenting the Script Scene by Scene
Segmenting a script requires that each scene in the script be numbered. This is done after the script's final revision. An original script differs from a shooting script in that the original script may mention four to five scene changes in a three-sentence narrative description. The shooting script untangles all of this into distinct and numbered scenes. If you are asking, "Why not write a shooting script from the beginning?" the answer is that the additional formatting of a shooting script hampers readability.
The original script needs to read quickly and smoothly for evaluation. With a pencil and a straightedge, each page is marked in eighths. An entire page is considered 8/8. For example, a scene that runs one and a half pages is 12/8. This process helps create an organized and accurate estimate for the shooting schedule because scenes that are similar but out of sequence are shot together and because knowing the relative length of each scene helps determine how to arrange them in the schedule.
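The page-eighths arithmetic is simple enough to sketch in a few lines of Python. The scene lengths here are hypothetical; the point is only that eighths add up across scenes to give total script pages:

```python
from fractions import Fraction

# Scene number -> length in eighths of a page (hypothetical values).
# A full page is 8/8; a one-and-a-half-page scene is 12/8.
scene_lengths = {1: 4, 2: 12, 3: 8, 4: 2}

total_eighths = sum(scene_lengths.values())
total_pages = Fraction(total_eighths, 8)

print(f"Scene 2 runs {scene_lengths[2]}/8 of a page")
print(f"Script total: {total_eighths}/8 = {float(total_pages)} pages")
```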
Figure 2: After segmenting a script (left), creating a shooting script (right) and a shot list becomes easy
How many setups per day? As a rule of thumb, Hollywood productions shoot a page a day, so you would be limited to shooting 8/8 worth of scenes per day. Generally, you can shoot a little more than this per day with lightweight cameras such as the DVX100 or XL2 and a smaller crew. It's important to remember that shooting a film is exhausting, so don't run your cast and crew into the ground! This raises the question, "How much coverage can a production really do in one day?" It is hard to answer. As a filmmaker you have to decide how you want the film told pictorially, and that will indicate how many "setups" are needed. A setup is the relative position of the camera to the talent. When filming two characters, you will most likely want a setup from character A's vantage point and another from character B's vantage point. This is two setups, as you will have to move the camera into position for each. Likewise, you may want to get both characters in the same frame, adding a third setup. Within each setup you will have the actors do several takes. It all adds up quickly:
Scene Length × Setups × Takes = Time Required to Shoot a Single Scene
Beginning filmmakers often try to shoot more than they should in a day. This has several negative effects: you run the risk of burning out your crew, your cast, and even yourself. It can become counterproductive as takes become less usable and you have to shoot more, and running a shoot like this means you won't get through the shot list. That being said, pad the schedule with some slack. In addition, the time you spend in other areas of preproduction will make you comfortable with the material, cast, and crew. This comfort level will produce solid performances and a smooth production for all.
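The rule of thumb above can be turned into a quick back-of-the-envelope estimate. The minutes-per-eighth figure below is an assumption made up for illustration, not a number from the text; substitute your own average from past shoots:

```python
# Scene Length x Setups x Takes = Time Required to Shoot a Single Scene.
# MINUTES_PER_EIGHTH is a hypothetical average covering running the
# take plus resetting; it is illustrative only.
MINUTES_PER_EIGHTH = 2

def scene_shoot_minutes(length_in_eighths, setups, takes_per_setup):
    """Estimate the shooting time for one scene, in minutes."""
    return length_in_eighths * setups * takes_per_setup * MINUTES_PER_EIGHTH

# A half-page (4/8) dialogue scene with three setups (character A,
# character B, and a two-shot) and three takes per setup:
minutes = scene_shoot_minutes(4, 3, 3)
print(f"{minutes} minutes (~{minutes / 60:.1f} hours)")
```

Note that an estimate like this leaves out lighting, blocking, and meal breaks, which is exactly why padding the schedule matters.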
Creating Scene Breakdown Pages
For each scene, a scene breakdown page lists the actors, props, location, and conditions for the scene. These pages are incredibly helpful to the person managing film continuity (ensuring that scenes are consistent from one to the next and that nothing jarring sticks out to the viewer). The breakdown page also helps you create the call sheets that notify cast and crew when they need to arrive on the set.
Figure 3: Scene breakdown
In the Chapter 2>Script Breakdown folder is a script breakdown template that runs four to a sheet. Print and trim these, or buy a package of 3x5 index cards.
Scheduling the Shoot
Use the scene breakdown pages to schedule the shoot. Start by arranging the breakdown pages by location, putting them in separate piles. Arrange the scenes within each pile by time of day. Then break each pile down further by grouping the scenes by character. You want to shoot similar shots back-to-back because shooting in sequence is grossly inefficient: sets have to be broken down and put back in exactly the same state; lights, dollies, and cameras all have to be repositioned; actors and crew members have to wait around or be intermittently called back to the set. Granted, 24p cameras and compact lighting equipment are smaller and easier to move, but repositioning them and everything else still takes more time than having someone do a change of wardrobe. If there are other things to consider, such as rented props or equipment used across several dissimilar scenes, you might want to organize your shots around when these will be available.
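The ordering described above (location first, then time of day, then character) is just a multi-key sort. A minimal sketch, using hypothetical scene data:

```python
# Scene breakdown "cards" as dictionaries; all data is hypothetical.
scenes = [
    {"scene": 3, "location": "Cafe", "time": "DAY",   "lead": "Ana"},
    {"scene": 1, "location": "Park", "time": "DAY",   "lead": "Ben"},
    {"scene": 5, "location": "Cafe", "time": "NIGHT", "lead": "Ana"},
    {"scene": 2, "location": "Cafe", "time": "DAY",   "lead": "Ben"},
    {"scene": 4, "location": "Park", "time": "DAY",   "lead": "Ana"},
]

# Group by location, then time of day, then lead character, so
# similar setups land back-to-back in the shooting order.
shooting_order = sorted(
    scenes, key=lambda s: (s["location"], s["time"], s["lead"])
)

for s in shooting_order:
    print(f"Scene {s['scene']}: {s['location']} / {s['time']} / {s['lead']}")
```

This keeps all the café scenes together regardless of where they fall in the script, which is the whole point of shooting out of sequence.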
Shots without any actors in them, such as a city landscape or an establishing shot of a building, can be shot at almost any time and don't need to impact the principal photography of your film. If you run out of time, these can always be done on another day, and even by someone else given the proper direction.
Production Planning Software
The last few sections discussed how to do scheduling the old-school way. The new-school way uses movie scheduling software such as Gorilla or Movie Magic Scheduling. With these programs you can import your shooting script, and they will create a database of the actors, props, and scenes. They can help you create script breakdown pages, a shooting schedule, call sheets, and many other production-related reports. Getting to know all of the features of either of these two packages can take the better part of an afternoon. However, once you understand all that they can do, using them gets a bit addictive, and you will want to do all your productions this way in the future.
Gorilla
Gorilla Film Production Software by Jungle Software is a comprehensive film budgeting, scheduling, and contact-management package. It's built in FileMaker Pro and has many easy-to-use wizards as well as advanced entry interfaces. It has a Main Menu palette from which all the main features are accessible. It runs on Windows XP, Mac OS X, and, surprisingly, Mac OS 9. In addition to scheduling and budgeting, it has databases for film festivals, managing dailies, casting, rehearsals, storyboards, and accounting. It supports importing scripts from Final Draft and Movie Magic Screenwriter. Once the script is imported, you add the props, scenes, and characters to each scene, and Gorilla assists you in creating the schedule. Given the list price of $399, it is a very full-featured product. For the independent filmmaker looking for a total package, it's a great deal.
Figure 4: Gorilla
There is a free downloadable demo that will work for about 15 days but does not allow printing of advanced reports. See www.junglesoftware.com. EP Scheduling EP Scheduling, formerly Movie Magic Scheduling, by Entertainment Partners is strictly a scheduling application. Entertainment Partners also offers a budgeting application named EP Budgeting. The two packages are tightly integrated. The interface for the scheduling program is very streamlined and is based upon the scene breakdown and production board metaphor. It also imports script elements from script-writing software and is a joy to use. It’s considered the industry standard for big-budget productions and has a big-budget price of $699. Entertainment Partners does offer a bundle deal for both EP Budgeting and EP Scheduling, so check out their web site for more information and a 20-day demo.
Figure 5: EP Scheduling
There is a free downloadable demo that is fully functional for 20 uses. After the trial period, it works in view-only mode. See www.entertainmentpartners.com.
Insurance
I would argue that production insurance is something you cannot live without. Most rental facilities, city film commissions, and shoot locations require it. Not only should you carry it to cover the equipment and locations for your film, but more importantly, you should carry it to protect yourself and your cast and crew from legal and physical harm. That being said, there are a few types of insurance that provide coverage for these different scenarios, and I've listed them below. As a disclaimer, take what I have to tell you here lightly and consult a professional insurance agent who specializes in film and video production for more information. Once you are insured, keep a copy of the policy and the insurance company's telephone and fax numbers handy. If you need to add a rental house or property owner to your policy, having this information will save you time and make you look organized.
Errors and Omissions Insurance
Errors and omissions insurance covers you against liability for not securing content rights to photos, music, or other copyrighted material in your film. Unless you're working in a complete vacuum and everything is being created from scratch, you should invest in errors and omissions insurance if you plan to show your work at festivals or distribute the project theatrically, on video, or on the Internet.
General Liability and Workers Compensation
General liability insurance covers lost or broken equipment, damage to locations, and personal injury. Policies vary with provider, so read all the fine print. You may have to get workers compensation insurance if the cast and crew are your employees. Workers compensation covers personal injury, and each state has its own interpretation of what workers compensation is, so look into this if you incorporate your production.
Other Forms of Insurance
There are several other forms of specialized insurance used in the film industry. A completion bond protects the interests of investors by promising that the film will be completed. If the film is not going smoothly, a bond company can come in and fire the producer, director, or others. There is also cast insurance, which covers costs incurred when talent drops out of a film. An umbrella policy is a liability policy that includes several types of production insurance under one policy and can be set to cover larger liability costs. As I mentioned earlier, if this is your first time out as a filmmaker, talk to an experienced producer and consult with an insurance agent to find out what you should get coverage for.
Copyright
Films are intellectual property just like books, music, art, and inventions. A film, regardless of whether it is a narrative, documentary, or industrial film, cannot infringe on the rights owned by other filmmakers, writers, musicians, or artists. Also, you want to protect your own rights regarding your work. This is meant to be informational, not legal advice. Seek the advice of a lawyer if you have further questions.
Protecting Your Rights
Protecting the rights to your own material is much simpler than acquiring rights to material owned by someone else. It's so simple that all you have to do is put your name on it. The difficulty, if it arises, is in proving when you created the work. That's why it's a good idea to put your name, the copyright symbol, and the date on your work and register it with the Library of Congress at www.copyright.gov/register/.
Acquiring Rights
When your film includes work produced by others, it is in your best interest to secure rights to it. Your film may not be distributed or shown publicly unless you have permission to use quotations, artwork, or music (contracts or written approvals), actors' performances or crew contributions (release forms), and locations (permits and written approvals).
Music
As many consumers have found out recently, downloading music on the Net and not paying for it is stealing. Putting music you don't own or don't have the rights to use in your film is essentially the same thing. Make no mistake, the RIAA or someone's lawyer will go after anyone who irresponsibly uses copyrighted music.
That being said, synchronization rights are the most common agreement one gets for films using protected music. These grant you the right to play the music in synchronization with the moving images in your film, but that's all. If you work with an independent composer, this is a pretty painless process. It gets more involved when you want a song from an up-and-coming indie rock band on a record label. However, many record companies realize that filmmakers want access to music, and so some of the major labels will give students and independent filmmakers a break on licensing music for festivals. They may ask for more money if your film gets bought and you continue to use the music, which is reasonable. The labels may use the film as a promotional outlet for unknown artists, so don't turn your nose up if you are offered such an arrangement. It could be a great boon to your film, and you're helping another unknown talent.
Releases for Cast and Crew
The first thing to do before starting the shoot is to get agreement from everyone involved that the work they do on your production is owned by you or the business entity established to produce the film. This goes for everyone: the cinematographer, the production designer, all actors, and extras. For crew members, it is about the services or work they produce for your film. For actors, it is about the rights to their images and performances in your film. On non-paying work, offering meals, credit, and a copy of the completed work may be about the best you can do. On paying or union work, there is a lot more to consider, so contact the local Screen Actors Guild (SAG) for more information on working with union actors. They are rumored to be a bit more indie-friendly, and for short or experimental films that probably won't make money, they give you a lot of help. To learn more about the Screen Actors Guild and view resources they've made available for independent filmmakers, visit www.sagindie.org.
Release Forms for Documentary Subjects
Anyone who appears in a documentary film should sign a release form or give you a verbal release on camera. Signing the release before a shoot was the old way. Today, it may be done at the end of the interview or after the subject has seen the footage, because subjects may want to withhold a release if they have problems with the way they are being portrayed. All of this will depend upon the person and the relationship you form with the subject. Anyone who is seen on camera should sign a release, including people standing close by in frame but not being interviewed. I recommend having copies of a general release form on hand for bystanders to sign. Getting a verbal release on camera also works for informal situations. In these cases, it is more about being polite and respectful. People are often thrilled to learn that they'll be in your project, but always respect their wishes when they are not. If you are doing a film on someone deceased, you have fewer problems because they cannot sign a release. But remember that someone owns the images or work of the dearly departed and that you will need a release or contract to include that material in your project.
Location Release Forms
A location release form is different from a shooting permit. A release form is permission to shoot on private property, whereas a shooting permit is permission to shoot in a public space and is granted by a municipal agency such as a city or state film commission. With a location release
form, you are asking for permission to shoot your project on the property. It may have clauses regarding fees, giving credit to the property owner, or damage done during filming. In my experience, most private individuals and small-business owners tend to be welcoming to this sort of thing as long as they can benefit from the publicity. Large businesses such as shopping malls, chain stores, and large office buildings are less enthusiastic about filmmakers, but if you work closely with the local film office, they can help you gain access to such locations. There are several forms available online for location, cast, and general appearance releases at www.vidpro.org/forms.htm.
Permits
When you shoot in a public place with a bunch of gear and equipment, it is best to seek a permit. A permit gives you permission to haul your gear around a public place and shoot footage, and it can include how many free parking spaces you get and the right to restrict access to the set. Many city and state film commissions have regulations for how filmmakers access public space. Acquiring a permit alerts the police to what you are doing and facilitates requesting one or more officers to stand duty during production to manage crowds and establish security and safety on set. If you are going to be by yourself with just your camera and tripod, you will most likely not need a permit. If you are approached, you can say you're a tourist or a student filmmaker. However, if you plan to park a one-ton grip truck across several metered spaces while laying down enough cable to power a submarine, get a permit.
Getting More for Your Permit Dollar
Larger cities do charge a lot for permits. Investigate whether they have rates for student, independent, or documentary films. In addition, look at the smaller surrounding cities. They are often cheaper and friendlier, and they work harder for the independent filmmaker. The Oakland Film Commission is a prime example.
They can offer a lot of what San Francisco offers at a much lower price, and they go out of their way to support and accommodate filmmakers.
Keeping the Locals Happy
Because you will potentially be disrupting the lives of the people who occupy the locations you will be using, be courteous, responsible, and respectful. San Francisco, for example, offers filmmakers a form to fill out and distribute in the neighborhood where filming will take place. This helps the neighborhood understand the scheduling, right of way, and the impact the filming will have on their environment. If you plan to shoot in one place for a long time, consider sponsoring a block party or giving the neighborhood some token of your appreciation.
Security If you are not filming inconspicuously, work with the local film commission to arrange security. This can be a police officer as mentioned earlier or a service suggested by the commission. You don’t want to worry about vandals, overly disruptive onlookers, or anyone with criminal intentions. In some cases, your production insurance or film permits may require security.
Get a Crew Even though the idea for your film began with you, you cannot do everything yourself. You need a crew—especially during production. However, shooting with smaller cameras such as the Canon XL2 and Panasonic DVX100 can enable you to use smaller crews.
Figure 6: Once you find a crew, keep in touch and help each other out
Network The best thing you can do before completing your script is to network and find other people who share your passion for filmmaking. They act like a support group by critiquing your script, offering production advice, and providing you with a crew and equipment. In San Francisco, for example, there are several non-profit film and video associations where local filmmakers take classes, network, and attend screenings. I have met several people through these events. I have helped people out with their films, and they have helped me out with mine. In addition, I have met filmmakers on mailing lists, and we have gone on to work on each other’s films.
Who to Look For
If you wrote your script, you have the option to direct it or find someone else to direct it. Scriptwriting and film directing for documentary or narrative film are well outside the scope of this book. What I can offer, however, is advice on where and how to find a director and writer as well as other members of your crew. Besides the director and writer, your project will need a producer, a director of photography, camera assistants, sound recordists, gaffers, production designers, storyboard artists, grips, wardrobe designers, and makeup artists.
Director
You are most likely either a writer/director, a director, a director/editor, a director/producer, or some type-A variation of several of these roles who does everything except clean windows. If you're solely a writer, a camera operator, or a producer with no desire to direct, then you need a director to direct the film. A good director doesn't seek the opinion of everyone before making a
decision. She directs with gravitas and not a heavy hand. She can manage many people and tasks concurrently but does not micromanage. She arrives on set with passionate ideas and never gives result-oriented direction. She owns and creates the environment where everyone can do what they do best. As with any important role, ask for references, look at previous work, and get to know the person a little before preproduction begins.
Writer
You have a good idea, a treatment, an outline, or a draft screenplay, but you feel that you need guidance. A writer will help refine your script's dialog, narrative structure, and format. A script contains the dialog and action that form your story. It shouldn't include information intended for the actors (background story), the cinematographer (camera angles), or the production designer (elaborate descriptions of sets or props). As a general rule of thumb, if you cannot show information through action or dialog, don't include it in the script. Jot it down somewhere else and save it for the preproduction meetings. If you are producing a documentary film, seek the assistance of a writer who specializes in writing proposals for grants. There are many private and corporate foundations and government agencies that give grants for completing or distributing documentary films. The writer can help you research and find the organizations most likely to fund your project and help you through the application process.
Producer
The state of your film may be a script in its final draft, it may be in production, or it may be in post, but until it is complete and shown to an audience, it is not a film. What takes your film through these stages intact is producing. Perhaps you can both direct and produce, but it is highly recommended that you focus on directing and find someone you trust to do the producing.
Another way to put it is: Your role as the director is to make a film that expresses your vision, and the producer’s role is to ensure that you complete your film. While you have the vision to create a film, you may find that you do not have the chutzpah to raise funds, maintain a budget by saying “no,” keep schedules, arrange for rental equipment, and generally make stuff happen. If this is the case, find someone who can. Look for someone who is connected or is not afraid to make phone calls to complete strangers and ask for donations or a better deal on materials and equipment. A good producer is creative and appreciates the efforts and talents of all cast and crew members. They may not be able to shoot award-winning cinematography, but they appreciate it and demand it. Prior experience with producing films is a must, but you may be able to work with someone who has business experience and the desire to work on an independent feature. It is important for you and the producer to come to a written agreement on the relationship you will have. Define how you will share logistical tasks, set important milestones for the film’s stages of completion, determine the amount of creative input the producer has, and clarify the producer’s salary and compensation should your project get a budget and later be sold and distributed.
Director of Photography The director of photography (DP) shoots the film and collaborates with the filmmaker and production designer on the film’s composition and visual style. For projects that shoot on film, the DP chooses the stock, lens, and camera. She will also work with the lab. Film projects also require many assistants to operate a film camera. In these situations, the DP is not even behind the camera but is directing a team of assistants who are.
Figure 7: The director and DP relationship
With small 24p cameras, the DP can operate the camera by herself and needs only one or two assistants.
Production Designer
A production designer creates the visual style for the film. Working closely with the director, he conceptualizes the world in which the film's characters live and gives feedback on frame composition and lighting to the DP and gaffer. The terms "set designer" and "art director" are interchangeable with "production designer."
Figure 8: The author on set giving art direction
To design the film’s look, the production designer presents concept sketches to the director, producer, and DP, and he communicates the final ideas to cast and crew. These sketches can be hand-drawn, produced with 3D or storyboard software, or scanned from photographs. He works with the director and a location scout to find locations for the film. His other responsibilities include finding and making props and designing, building, or overseeing the construction of sets. On smaller productions, he will also take on special tasks such as designing graphics, signage, and marketing materials for the film. Since he is responsible for the film’s overall look, he gives art direction to storyboard artists, costume designers, and makeup artists. Before production he breaks down the script and creates a detailed design budget and a property list (prop list). During production, he oversees dressing the set and produces any last-minute design needs.
Costume and Makeup
The costume designer works in the art department and dresses the actors. The costume designer will work closely with the production designer on the look for characters. On low-budget productions the costume designer will work with the actors to see what is usable from their own wardrobes and search through thrift and consignment shops to flesh out the rest of the wardrobe. Creating relationships with independent clothing stores and boutiques is another way to acquire costumes. Offering credit, promotional opportunities, and thanks to designers or boutique owners is sometimes all you need to get something that will improve the production quality of the film.
Figure 9: An actor receives some touch ups before shooting starts again
The makeup artist also works in the art department and applies makeup to the actors. They may do special-effects makeup such as disguises or wounds, but mostly their task is to apply makeup that will show the actors in the best way possible given the lighting and cinematography choices. Look for experience and good craft skills in both roles. You want someone who is creative but can also get things done quickly in a pinch. In addition to asking for references, ask to look at a portfolio, which is a collection of production photos and sketches from previous work. On set, both the costume designer and makeup artist manage an area where they can get people dressed and made up. They also help with continuity by keeping track of clothing and makeup changes between scenes. They’re also around to make last-minute or sudden repairs.
Gaffer
The gaffer is the lighting designer. She arranges all the lights on set and works closely with the DP to ensure that talent and props are lit properly and that the film will have adequate exposure. In preproduction, the gaffer does little work but may be consulted by the DP or director when it comes time to do exposure and look tests for the production.
Grip
A grip is someone who helps the gaffer move lights and other equipment around. Grips should have experience with applying gels to lights and windows and with wrangling equipment in and out of a van. They also assist in breaking down and dressing sets when needed. On smaller independent productions, a grip is indispensable. When you find folks who are willing to do this work, make sure you feed them well and thank them for their time. If you cannot find experienced grips, look for people who you know are dependable and have the strength to do the work.
Figure 10: Grips do just about everything on a small production, from lighting to giving massages
Continuity
The continuity supervisor (also known as the script supervisor) helps maintain believability by noting where items are, what people are wearing, their mood, and anything else that needs to be consistent across sequential shots. They keep a copy of the script, a notebook, and an instant camera to create and store continuity notes.
Figure 11: Continuity helps keep the film accurate and believable
Production Assistants
“Production assistant” is in many ways a catchall phrase for someone who helps out in any way on a production. This may mean signing people in as they arrive on set, fetching supplies when
they run low, or letting the craft services people in the building when it’s time to feed people. Everyone gets their start as a production assistant, or PA as they’re called. Students make great PAs.
Sound Recordist
One of the most overlooked elements of any production is sound. Early on, find someone who cares about it. This person is attached to the camera with a mixer and ideally a second system for recording sound. Find someone who can work quickly and knows good sound when he hears it, because believe me, you don’t want to work with someone who is constantly asking for checks or second opinions.
Figure 12: The sound recordist
Boom Pole Operator
A boom pole is a long telescopic pole that holds a directional microphone. The boom operator works with the sound recordist and the DP to ensure that the talent or subjects are effectively recorded. Since this is one of the most thankless jobs, when you find someone who likes doing the job, hold onto him and don’t let him go! Seriously, the job can be very tiring, and it takes a lot of skill to ensure the talent is recorded and the microphone is out of frame.
Figure 13: Boom pole operators
It is a good idea to identify a second person, perhaps a grip or assistant, who can watch the boom operator and fill this position on a couple of takes so the operator gets a break.
Editor
Finding a good editor is as important as finding a good director of photography, director, or writer. Obviously the editor is the person who takes the footage and assembles the film from dailies to final cut. Again, I suggest looking at the local community college or non-profit education facility. Other good places to look are television stations and post houses. Your relationship with the editor is crucial. It can be the most collaborative endeavor you undertake in producing your film. It’s common practice to consult an editor in preproduction or during the development of your script or treatment. It is most crucial to have them look at the shooting script, the list of shots planned for your film. Often, they can suggest additional takes and setups.
Figure 14: An editor on set reviewing takes and beginning a rough cut
Sound Editor/Designer
The sound editor or designer “sweetens” the soundtrack. Sweetening is the process of taking the originally recorded audio and eliminating unwanted hiss and noise. In addition to sweetening, the sound editor will mix the recorded tracks with music and effects. Lastly, if the audio on the dailies is of poor quality but the performance of the lines is right, the director will ask actors to go into the recording studio, and the sound editor will supervise an ADR session.
Effects Supervisor
An effects supervisor is someone who manages or produces visual effects for a film. Visual effects can include compositing computer-generated imagery (CGI) with the live action, creating motion graphics, or creating sophisticated color correction and effects. During preproduction, they can produce animatics, help art direct storyboards, or work with the DP to plan shots that will have effects added. On Les Poupets, a short film by my friend Jean-Paul Bonjour, I acted as both effects supervisor and production designer. During preproduction I created animatics for the title sequences and several visual effects shots.
Craft Services
Ideally you will hire a catering service to prepare and serve breakfast, lunch, and dinner. If you cannot afford this, ask a friend or relative who you know is a good cook and can cook for a large crowd to help you out. In addition to preparing meals, having snacks and beverages on hand will help everyone function on the set and give them fewer reasons to go off and get something. Remember, you are on a tight schedule and cannot afford to have half your cast and crew off the set. If you don’t pay people, you had better be sure to feed them well. A good meal keeps people energetic, focused, and happy. The time spent at a table together also helps build relationships and camaraderie among the cast and crew.
Figure 15: Feeding the cast and crew is an easy way to keep people happy
Who Reports to Who?
After you have assembled the dream team to produce your film, establish the reporting structure. Most productions run smoothly because you develop working relationships with many of the people involved and egos are not a problem. Just remember that you want everyone to want to work with you again. A clear chain of command helps the production run smoothly and gets inexperienced team members oriented to working on set. Following is an org chart for a film, with explanations of the relationships, that should help you get your team together.
[Org chart roles: Director; Director of Photography (DP); Camera Assistant(s); Gaffer; Grips; First Assistant Director (1st AD); Second Assistant Director (2nd AD); Production Assistants (PAs); Production Designer; Art Director; Sets; Props; Makeup; Wardrobe; Continuity; Sound; Boom Operators]
Figure 16: An org chart for a film
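If it helps to keep the chain of command in front of you, the chart can be jotted down as a simple nested structure and printed. This Python sketch is illustrative only: the groupings shown are one plausible reading of the chart, and every production arranges its reporting lines a little differently.

```python
# A film crew's chain of command as a nested dict: each role maps to
# the roles that report to it. The hierarchy below is one plausible
# arrangement, not a rule; adjust it to match your own production.
crew = {
    "Director": {
        "Director of Photography (DP)": {
            "Camera Assistant(s)": {},
            "Gaffer": {"Grips": {}},
        },
        "First Assistant Director (1st AD)": {
            "Second Assistant Director (2nd AD)": {
                "Production Assistants (PAs)": {},
            },
        },
        "Production Designer": {
            "Art Director": {"Sets": {}, "Props": {}},
            "Makeup": {},
            "Wardrobe": {},
        },
        "Sound": {"Boom Operators": {}},
        "Continuity": {},
    }
}

def print_chart(node, depth=0):
    """Print each role indented beneath the person it reports to."""
    for role, reports in node.items():
        print("  " * depth + role)
        print_chart(reports, depth + 1)

print_chart(crew)
```

Printing the tree before the first shoot day is a quick way to catch roles you have not yet filled.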
Casting
You can have the best script ever written, a talented crew, and more money than you know what to do with, but if the actors you cast don’t deliver the goods, your film will be no better than a bad cable network’s late-night movie of the week. While this may sound funny, it isn’t. I saw a film at a major festival where more than half the audience immediately walked out as the credits began to roll. The poor filmmaker must have felt as if his stomach fell out. To make matters worse, I could hear people trying to unload tickets to the film’s additional screenings. Above all else, don’t take casting lightly. Sure, people will say, “There are a lot of talented actors out there,” but not only do you want talented actors, you want talented actors who are right for your script. What is right is not necessarily what has screened in the theater inside your head for the past several months. It may be something entirely different, but until you engage with actors in an audition, you won’t know what is right.
Where to Find Talent
If you live in a major city, finding actors is not difficult given the amount of theater and commercial work that cities create as well as the schools and non-profit filmmaking centers where actors meet with filmmakers. There are additional resources at your disposal:
• Talent agencies: Talent agencies are a viable option only if you have the budget to pay your actors market wages.
• Casting agencies: Casting agencies manage a head shot file of actors and can help you with running casting sessions. This is a lot cheaper than working with a talent agency.
• Internet listings: Craigslist, Yahoo Groups, and independent filmmaking web sites are a few options.
Auditioning
An audition ad tells prospective cast members what you’re looking for in a general sense. Some directors have firm ideas about casting and advertise for specific looks, age, body type, and gender. These ideas are justifiable only if the script explicitly calls for a specific look. In a lot of cases casting should be blind to eye and hair color, age, gender, body type, and race. While you want something believable, you don’t want something stale and typecast, so keep an open mind. In addition to a description of the roles, include a synopsis of the film, bios of key personnel such as the director, the production timeline, and the film’s format and distribution options.
Running Auditions
Schedule auditions and find a location for them after you’ve looked at head shots but before the actors line up. When scheduling, give yourself plenty of time to review each actor. An ideal location will offer separate waiting and audition areas. This gives the auditioning actors a private audience, and it allows an assistant to orient and manage the actors waiting outside. When running auditions you can give actors a few scenes to perform. The scene can have all the parts in it, or you can reduce it down to only the role they are auditioning for, with cues. This has the benefit of allowing you to see how an actor performs with other actors, since neither side knows exactly how the other will respond. When directing an audition, have a clear idea about what you want the actor to try out. Be comfortable with making requests of the actor and giving adjustments.
While you should never give result-oriented direction, you do want to see how actors will work with you and the direction you might give them later if they are cast.
Recording
Recording a casting session can be a mixed bag. For starters, don’t do the videotaping yourself. Keep the camera on a tripod and in one place. There is no need to make the actors self-conscious during the audition. While one might argue that this is a test to see how the actors perform in front of a camera, the camera work can be distracting to you and others evaluating the performance. Recording helps when you are torn choosing between two actors, and it is helpful when a producer cannot attend a session and wants to see the auditions. Keep this in mind as the objective, nothing more.
Casting Follow-up
When you make your decisions, notify the actors who got the parts first. When they accept, notify everyone else. Thank everyone and keep the rejection simple. Thank them for their interest and tell them you will keep them in mind in the future. Being on good terms helps your reputation, and while an actor might not be right for this part, they could be right for another. In case the original actor falls through, it is crucial that you have a relationship with the second or third choice.
Documentary Casting
When you do documentaries, you approach the structure differently than in narrative film. Real life doesn’t follow Shakespeare. In verite, the structure unfolds. With a documentary you are often looking to find people to tell their own stories. Ideally, these stories are in line with the story you want to tell. In some ways it is a casting process. It’s about getting the best story told without having to resort to narration.
Finding Storytellers, not Actors
Subjects in a documentary do not act for the camera and do not normally take direction. They need to support the case for your film through their actions because your film has informational needs that should reach your audience. To that end, the subjects you choose need to embody these issues and be able to articulate them. Start with pre-interviews rather than on-camera interviews, because you invest far less time in these and you can assess the subject’s potential contribution to your film. Do the pre-interview without a camera. Make it an informal opportunity for you to get to know them and for them to get to know you and your project. Don’t bring a recorder. Take notes. Make them feel at ease. Don’t ask the hard questions in the pre-interview. Make them feel safe and don’t ambush them. It’s not 60 Minutes. Pre-interviews are okay over the phone if you cannot afford to travel. You can get a lot of information from a phone call, and it can help you determine whether or not the subject is worth traveling to for an interview.
Approaching People
Getting people to commit is not always easy. Outreach, networking, and following leads all help and may be preferable to directly contacting subjects. Do your research and find out whether people are approachable. If you approach someone cold, they might tell you no immediately. This is even harder when the content is personal or where you are seen as an outsider in the community you want to investigate.
To get the inapproachable people to say yes, approach them through a third party—someone who knows the subject well and who believes as strongly as you do that the subject has a valuable story to tell. This person may be a family member, a close friend, or a colleague. Going through someone whom the subject trusts and respects can often get you the access you need. In order to create a film with some balance and objectivity, you need the other side, or the antagonist, to appear in your film. The best way to approach this is to offer him a soapbox to stand on to share his opinion.
Go with Your Intuition
That first visit is crucial because your goal is to begin to establish trust and rapport. This goes both ways. What was your first impression of the subject? Not only do they have to trust you to tell their story, but you have to trust them to fulfill your film’s premise and articulate your point of view. From personal experience I can say that subjects who want to be in a film for the sake of being in a film are worse than people who are inarticulate. They are often insincere and may even want to hijack your project. Be wary of people who want to be paid for their involvement. Ethically this is bad because as soon as this happens, they are actors. Remind them that most documentaries rarely break even, let alone make money, regardless of what Michael Moore’s films do, and that the most important thing is to tell their story so others may benefit.
Good Relationships Form the Quality of the Film
Your film is only as good as the relationships you make in the process of making your film. The relationship you develop with your subjects is very different from the relationship a reporter has with someone interviewed for a news story. The relationship is stronger because you will spend more time with the subject as you tell their story. It’s like the difference between portraiture, an art that evokes emotion, and biography, a format that is merely factual. Be totally honest with people about what you are going to ask them on camera. Ask them, “Is there anything you don’t want to discuss?” Offer the ability to discuss things off camera and see if they’re willing to go on camera later. In certain situations, it’s important to move in steps. Some people will freeze in front of the camera. In these situations, you need to engage the person in a conversation, because this will put them at ease. This is another reason why you need to do pre-interviews, spend time informally with the subject, and build the relationship first before rolling the camera. A question that often comes up is how deep you should “get in bed with folks.” When dealing with subjects whose viewpoint and morality you share, this is pretty simple and not an issue. When you are working with someone whose viewpoint you want for your film but which you oppose, you need to establish a comfort level for both you and the subject that respects your beliefs as well as theirs.
Honoring People and their Stories
It’s amazing that the world still trusts documentary filmmakers. You want people to be represented fairly, and sometimes people will react to how they are portrayed in a film. The issue becomes, “Do you show the subject your film or not?” This is a double-edged sword. If the subject feels poorly represented, he will not want to participate in the life of the film after production and editing are complete. The flip side is that showing dailies could make the subject self-conscious and affect their screen presence later, or cause them to not want to participate further. Remember, when editing, do not take things out of context or inappropriately construct new meanings from interviews. These new constructions often become the record because others take your work as factual. Keep things factual. You’ve got to honor people. Don’t beg them for their story, but tell them you’re not alone in wanting their story. Tell them that their story is important. Ask them, “If you don’t tell your story, who will?” Remember that the film might not benefit or flatter the subject, but it will benefit the audience and it will benefit you as an artist.
Letting the Camera Roll (being there when it happens)
Building any relationship with a subject is a series of negotiations that leads to strong commitments. It’s a weird kind of codependency, because they turn to you to tell you their life stories and you need them to embody your ideas. Because of the time it takes to get to know the subject, you may not bring a camera for weeks or even months as you establish trust. Once you have that trust, be there with the subject as the stories unfold. For example, a documentary filmmaker and friend of mine, Alex DaSilva, had planned to go to Sundance. The weekend before his flight, many events unfolded around two subjects in his film, Oakland High School students preparing for the Los Angeles Marathon. He decided to cancel his trip so that he could capture these events.
Stories are best when they are freshly told
By the time a subject recounts past events a second time, the telling is no longer raw. Don’t ask the hard questions in the pre-interview. You want the subject’s first reaction on camera. For this reason, develop two levels of questions: ones for the pre-interview and others for the on-camera interview. After informal interviews and discussions, decide what situations you will put them in or what life situations you will film. Great interviews come about when the filmmaker puts subjects in situations that get them to tell their story with emotion and action. This may be as simple as taking a subject to the scene where her story originally took place, or following her in her daily activities and filming her in action. Ask yourself, “What situations am I creating or following in this film? What can I have the subject do or react to in order to help tell my story?” Comparing what you have filmed with what you still need to film confirms which scenes or shots remain. A shot list or the first assembly of your film will inform you of what pick-up interviews might be needed to fill in the story.
Location Scouting
Besides finding actors and crew, shooting locations are the other big things to find in preproduction. The script and the production designer’s sketches define the qualities for a location, but there are other things to consider as well, such as the needs of the crew. The following is a quick checklist of items to consider when scouting locations:
• Acoustics: Is the space conducive to recording great sound? Is there internal noise such as air conditioning, radiators, clanking pipes, or equipment buzz? Is the outside car and pedestrian traffic noisy? Things near the location can also create noise problems—a factory, transit station, or airport, for example. Some of these problems can be eliminated by asking the owner if the machinery can be turned off, while others can be masked by using sound blankets.
• Available light: If the script calls for interior scenes, make note of window locations, the direction of the sun relative to those windows, and the light fixtures. Having the gaffer or director of photography attend is crucial. He or she can point out that the overhead lights or windows might need to be covered with gels, a cellophane-like material, to adjust the color temperature of the light according to the film’s production and artistic needs.
• Cost: Some locations might require a fee to shoot there. Locations such as a coffee shop or business may require a fee to cover the business lost to the production. If you’re shooting the next big independent film, the publicity can certainly help, but if you’re doing a short that will play in a few festivals, the location will not reap this benefit. If your production is set up as a nonprofit, the location’s owner may be able to donate the space and get a write-off. See an accountant and find out what’s possible.
• Power: When shooting in a location for the first time, make sure to ask an authority to look at the fuse box and determine the total available wattage you can use on location. While fuses are easy to replace, you cannot afford the production time lost replacing one.
• Space: No matter how large or small the production, your crew will need space to lay things out, store equipment, and work away from the set.
• Parking: You need space to park a truck with gear and the cars of actors and crew members. If parking is limited, look into public transportation, which can be risky if it is not dependable. Better yet, have people carpool.
• Permission: Have all your ducks in a row. A signed release from a property owner will protect you from the owner calling the cops on you and asking you to film somewhere else, or suing you because a studio bought your film and you made money. This only gets worse if you didn’t get errors and omissions insurance (although any big studio would expect you to have it or would gladly pay for it if they bought your film). Location releases are separate from filming permits because they cover the private space of someone’s house or office.
• Rules: Ask the owner or administrator if the location has areas that are off-limits for filming or for crew production areas. Jean-Paul Bonjour’s film Les Poupets was shot at his father’s law office. One attorney was in the middle of a case, so her room was off-limits. Make sure such information is communicated to everyone in the cast and crew. I’d even recommend posting signs on restricted areas.
Take Photographs
Bring a Polaroid or a digital camera and take as many photographs as needed. These are crucial when deciding upon a location and can be instrumental for the DP, gaffer, sound recordist, or effects supervisor.
The earlier this hits folks’ radar, the sooner they can do their own planning, such as where to place lights, microphones, the video engineering room, makeup and wardrobe, prop area, and craft services.
Figure 17: Photo taken with the director during scouting and another during production
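One note on the power item in the scouting checklist: available wattage is simple arithmetic (volts times amps per circuit), and it is worth running the numbers before load-in. This sketch is a rough illustration; the line voltage, safety factor, and fixture wattages are assumptions, so always confirm real capacity with the location’s owner or an electrician.

```python
# Usable wattage on a circuit = volts x amps, derated by a safety
# margin (a common rule of thumb is to load a breaker to about 80%).
VOLTS = 120          # typical US household circuit (assumption)
SAFETY_FACTOR = 0.8  # don't run breakers at full capacity

def circuit_capacity(amps, volts=VOLTS, safety=SAFETY_FACTOR):
    """Usable wattage on one circuit, with the safety margin applied."""
    return volts * amps * safety

def fits(fixture_watts, amps):
    """Can this list of fixtures run on a single circuit of `amps` amps?"""
    return sum(fixture_watts) <= circuit_capacity(amps)

# Example: two 650 W lights and one 300 W light (made-up fixtures).
load = [650, 650, 300]
print(circuit_capacity(15))  # 1440.0 usable watts on a 15 A circuit
print(fits(load, 15))        # False: 1600 W exceeds 1440 W
print(fits(load, 20))        # True: 1920 W available
```

Splitting lights across separate circuits, not just separate outlets, is what actually keeps the breakers from tripping.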
Storyboarding and Previsualization
Storyboarding and previsualization are planning tools for the filmmaker that are relatively low-risk ways to explore the film’s visual and narrative qualities. Storyboards are quick but well-crafted drawings of the beats in each scene. When put together in sequence on a board or edited with dialog, they explore the film’s pacing, camera movement, and editorial options. Previsualization involves producing animatics of moderate to high fidelity that are edited together and played back in real time. Previsualization is used when the scene calls for elaborate camera moves or visual effects. It helps the director and gives the DP a chance to see how a scene might look and whether it is worth the effort to produce.
Blocking Shots
Planning how actors appear in a shot is called camera blocking and should not be confused with storyboarding. Camera blocking determines how actors interact physically with each other, the props, and the set for dramatic effect. In this sense, camera blocking is similar to choreography and should precede storyboarding. Storyboarding is a previsualization tool that explores pictorially how each shot is composed inside a two-dimensional frame.
Creating Storyboards
Storyboards are planning tools that communicate narrative ideas and concepts visually. They were developed around the time of early animated films, but directors of live-action films soon took notice and adopted the storyboard format to visualize story ideas before committing to production. Storyboards shape a film’s cinematography, staging, and lighting because they allow the director to see how his creative risks will play out. They indicate the camera angle, the position of characters and objects, lighting, and the action that occurs.
Figure 18: Storyboards
The Relationship Between Director and Storyboard Artist
The relationship between the director and the storyboard artist is, at its best, a close collaboration. In some ways it is like the relationship the director has with the cinematographer during production or the editor during post. The two shape the film according to the director’s storytelling needs and the storyboard artist’s rendering and composition skills. Ideally, the director can draw simple diagrams of blocking and character arrangement and share these with the storyboard artist. Some of cinema’s greatest directors, including Alfred Hitchcock, James Cameron, and Wes Anderson, drew storyboards for their own films. And storyboard artists such as Joe Johnston, who did concept art for George Lucas, have gone on to direct their own features.
Format and Conventions
Keep a supply of pencils and felt-tip markers of varying thicknesses. I usually begin by sketching lightly in graphite and then adding tone with charcoal. A can of spray fixative is suggested to prevent the charcoal from smudging. Additional detail or thinner dark accents can be applied with felt markers or india ink and brush.
Arrows, Camera Angles, and Camera Movement
You can use a drawing program such as Illustrator to create storyboard templates and objects quickly. These provide a system for specifying movement, transitions, and other visual properties in your storyboards.
[Movement symbols: Zoom In, Zoom Out, Pan Right, Pan Left, Track Right, Track Left, Tilt Down, Tilt Up, Track Up, Track Down, Dolly Out, Dolly In]
Figure 19: Common symbols used in storyboarding movement
These symbols and storyboard templates are available on the DVD in the Chapter 2>Templates folder.
Alternative Methods for Storyboarding
If you prefer not to draw, there are other options for creating storyboards, such as using photography and collage techniques and storyboarding software. By simply using a tripod and a digital camera, you can position friends or actors and comp the frames in Photoshop. You can also take print magazines, cut up photos from ads, and comp them together using scissors or the selection tools within Photoshop. Also, you can use Photoshop to overlay arrows to communicate camera movement and action, and apply motion and Gaussian blurs to simulate camera effects such as shutter and focus effects.
Storyboarding software such as Frame Forge Studio Storyboard Artist or Storyboard Quick give you directorial control over 3D scenes. You can place actors, move the camera around, and create printed as well as animated sequences. Animatics or previsualization animations are also created in 3D animation and rendering packages such as Maya, 3D Studio, Lightwave and Electric Image. These animatics are not fully lit and textured but are rough, low-polycount animations that are strictly meant to communicate camera angles and sophisticated choreography of stunts and action.
Production Design
Production design gives movies their look and feel. A production designer designs sets and props and gives creative input on the makeup, costumes, and visual effects.
Tasks
The production designer is busiest during preproduction, as she has to develop the visual style for the film and get the look down and everything done before production can begin.
• Creating a prop list: By examining the script, she creates a list of all items that are needed for the film. Usually this is a spreadsheet with prop name, scene, associated character, budget, and whether or not it has been secured through borrowing, rental, or purchase.
• Scouting props: With a prop list in hand, the art director should go to antique and thrift shops. Take a digital camera; your photos will show the director options. If something looks perfect and the price is under 10 bucks, buy it. It’s not much money, and it’s better to take it than to return the next day and find it gone.
• Designing the sets: The production designer should be a core part of the location scouting team. By having photos of a location, it is a lot easier to figure out where props will go on the set, and it will help the production designer dress the set during the shoot.
• Preparing for continuity: A finalized prop list and set-dressing instructions are crucial to the person doing the continuity and script supervision. Take photographs of sets when they are dressed and make notes for putting things back in place in case a scene must be shot again.
• Preparing for set breakdown: When it’s time to pack up, you will have to put everything in its right place. For example, I was the production designer on Les Poupets. It was shot at a law office, and when we were done after a weekend of shooting, I had to remember how each law office and the law library were set up. Fearing the possibility of several angry lawyers on Monday, I took photos (a Polaroid is perfect) of each set location before I began dressing the sets. I also marked boxes and files with sticky notes so I could put them back in exactly the order I found them in.
Figure 20: Scouting props for a film with a Seventies retro theme
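The prop-list spreadsheet described above is easy to sketch in code. The columns follow the fields the text suggests (prop name, scene, character, budget, and how the item was secured); the specific rows, file name, and helper names here are hypothetical examples, not anything from the book.

```python
import csv

# Hypothetical columns, following the spreadsheet fields suggested in the text.
FIELDS = ["prop", "scene", "character", "budget", "secured_via"]

props = [
    {"prop": "rotary phone", "scene": "12", "character": "Marla",
     "budget": 8.00, "secured_via": "purchase"},
    {"prop": "law library books", "scene": "14", "character": "",
     "budget": 0.00, "secured_via": "borrowed (on location)"},
]

def write_prop_list(path, rows):
    """Write the prop list to a CSV file that opens in any spreadsheet app."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

def total_prop_budget(rows):
    """Sum the budget column so the designer can track prop spending."""
    return sum(row["budget"] for row in rows)

write_prop_list("props.csv", props)
print(total_prop_budget(props))
```

A CSV keeps the list portable between the production designer, the script supervisor, and the producer, since every spreadsheet application can open it.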
Rehearsals
Preproduction is not only about the technical preparation for the film but also about the preparation the actors and director do together. This is a time to experiment, get to know everyone, and learn each other's working and creative styles. It's an opportunity to work out ideas, practice complicated choreographed moves, test the script's dialog, and rewrite as needed. Pedro Almodóvar likens his work as a writer/director to that of a clothing designer and tailor: he writes the script, but he "tailors" each character's lines to the actor so there is a perfect fit between actor and character.
Directorial Notes and Rehearsing
If the director did not write the script, it is even more important for her to spend time with it. Careful and thoughtful analysis will make the direction and the performances stronger. This is also a great time for the director to reflect upon the script and develop a cohesive vision and plan for directing the film. By analyzing each scene, the director should develop a clear idea of each
character’s emotional state and motivation. Drawing from personal and observed experience, the director needs to create a list of options for the performances.
Acquiring Equipment
Shooting professional-quality video requires more than just a decent camera. Depending upon your project you may not need everything, but the other categories to consider are camera accessories, camera support, sound, lighting, and grip equipment. For documentary production, the bare-bones essentials are camera support and sound, with lighting a close third. Narrative projects benefit from equipment in each of these categories, but what you need always depends upon the situations in which you shoot.
Camera
Choosing a camera is an important preproduction decision. If you choose something cheap, you have more money to spread around, but the image quality suffers and you have fewer options in post. If you choose an expensive package, you get a stellar image and more options in post, but little money for everything else. In picking the camera, weigh your target distribution method, storytelling needs, and budget. For instance, a documentary destined for the festival circuit and public television can be shot on a 24p DV camera. If you are planning a film-out, then a better camera package, such as DVCPRO50 or DVCPRO100, is worth considering.
Shooting Tests Before the Shoot
The best way to determine which camera to use, if you don't own one, is to rent or borrow one and run tests. Shoot in situations you are likely to encounter, capture the footage, and view the results. If you plan to shoot with additional accessories, such as an anamorphic adapter, a dolly, a jib arm, follow focus gear, or any equipment you have never used before, rent it too and do a small test. Ask yourself how the camera handles. Is it easy to adjust and comfortable to hold? Does it deliver sufficient image quality? You want to know everything you can about the camera you're going to use, because you don't want to waste time on the set that could be spent on another take or setup. You also don't want a surprise if gear doesn't work as intended; it is better to find out before production begins. Shooting tests are not only about using the gear but also about the image quality. Capture the material you shot and look at it in your NLE or compositing application. Color-correct it and experiment with effects. If you're happy with the results, you know you picked a good camera. Use this time to learn the camera's controls.
Buying vs. Renting
This is a common issue for many first-time independent filmmakers, and honestly, buying and renting are both good ideas depending on the situation. Buying gear makes sense if you plan to use it a lot, or if you plan to make money by using it for paying gigs or renting it out to other filmmakers. If you make money using the gear, it pays for itself, and the purchase is tax-deductible. However, if you shoot one film and discover that filmmaking is not for you, then you have wasted a lot of money, even if you unload it all on eBay. In that scenario, renting or borrowing gear is a good idea: you will learn how to use the gear, and when you do decide to buy, you will have developed preferences that inform your buying decisions. Another buying strategy is to focus on one area of production tools and share gear with filmmakers who have gear that you do not. Another is to buy gear that doesn't become immediately obsolete. High-quality light kits, shotgun microphone packages, dollies, and cranes will not become obsolete the moment Sony, Panasonic, or Canon introduces a new model. Novice filmmakers often forget this gear because they bought a camera and nothing else.
Equipment Checklist
The following tables list production gear with common rental rates. Not all of it is necessary, but it gives you a good starting place for finding the things you may need. Don't forget insurance!
Table 1: Camera, Support, and Accessories
Item | Description | Daily Rental Rate
Camera | Obviously, you need a camera! | $120-150 (DV)
Tripod | Keeps your shots steady. | $25-50
Dolly | Moves the camera smoothly. | $65 for a doorway dolly; $275+ for a Chapman
Crane/jib | Raises and lowers the camera. | $150
Follow focus | Lets you pull focus precisely during a shot. | $35
Matte box and filters | Filters light and controls exposure. | $40
Anamorphic lens | Produces a widescreen image. | $50
Table 2: Additional Camera Accessories
Item | Description | Daily Rental Rate
Steadicam or shoulder mount | Keeps handheld shots from looking like a drunk friend did them. | $50 for a low-end model; $500+ for an elaborate rig
Field monitor | Never trust the LCD screen on the camera for focus or framing. | $50 for standard definition; $150+ for high definition
Waveform monitor | Monitors the quality of the video signal. | $125
Video deck and laptop | Ingest and organize dailies on set. | $150-400
Table 3: Lighting
Item | Description | Daily Rental Rate
Lighting kit | A good kit contains a few fresnels and an open-face light. | $75+
HMI | Daylight-balanced lights. | $50 for Kinos; $125+ for more expensive HMIs
C-stands | Used for mounting lights. | $10 per stand
Lighting accessories | Soft boxes, paper lanterns, bounce cards, or reflectors. | $35
Expendables | Diffusion gels, gaffer's tape, etc. | usually pay as you go
Table 4: Sound Equipment
Item | Description | Daily Rental Rate
Microphones | Recording audio. | $30-50 for a directional mic; $25-40 for a lavalier mic
Boom pole | Essential for mic placement. | $10-15
Mixer | Offers more control over audio. | $30-50
Second sound recorder | Records additional tracks. | $50-200
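Rental rates like these add up quickly across a multi-day shoot, so it is worth totaling a package before committing. The sketch below uses assumed day rates taken from the low end of the ranges above; swap in real quotes from your rental house.

```python
# Hypothetical day rates drawn from the tables above; where the book lists
# a range, the low end is used here. These are assumptions, not quotes.
DAY_RATES = {
    "camera (DV)": 120,
    "tripod": 25,
    "shotgun mic": 30,
    "boom pole": 10,
    "lighting kit": 75,
}

def rental_total(items, days):
    """Total rental cost for a package of gear over a multi-day shoot."""
    per_day = sum(DAY_RATES[item] for item in items)
    return per_day * days

# A bare-bones weekend documentary package: camera, support, and sound.
print(rental_total(["camera (DV)", "tripod", "shotgun mic", "boom pole"],
                   days=2))
```

Many rental houses also charge a "weekend" as a single day, so the real total may be lower than a naive per-day sum; treat this as a budgeting ceiling.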
Grip Truck
A truck or a van may be used for smaller productions. It may include C-stands, apple boxes, and expendables. Large trucks will require a special driver, whereas a van can usually be rented by anyone with a valid driver's license. Expect to pay $150 per day for a van and $300 for a large truck.
Case Study: Jean-Paul Bonjour
Jean-Paul is a graduate of NYU's film program and a creative lead on the Final Cut Pro team. He directs and produces through Refuge Films, a collective he started with five other Bay Area filmmakers.
What's your background? My roots are in writing and storytelling. I studied filmmaking at NYU and did a few 16mm shorts. After graduation, I was a freelance photographer and worked for a producer who let me work on his Avid at night. Around this time I learned about Final Cut, bought it along with a DV camera and a Blue and White G3, and produced my first DV short. After I published an article on Final Cut Pro, Apple found me and hired me to work on the Final Cut team.
What got you interested in 24p? In school I was taught that there were two choices. I'd have to go to Hollywood and spend years in the trenches before getting a chance to make a film. The alternative was to go the independent route, become a starving artist who begs, borrows, and steals my way to produce one short film, and pray the film gets me enough notice at festivals to do it a few more times. I feel 24p has truly democratized filmmaking because I can write, direct, and produce a project and have a lot of control throughout the entire process. 24p filmmaking has also enabled artists to be truly independent without completely breaking their backs, and it allows filmmakers to take more risks.
What projects have you done in 24p? I've shot two shorts using the Panasonic Varicam with the P+S Technik adapter and Zeiss prime lenses. My first short was two minutes long, and the second was nine minutes.
Any thoughts on the P+S Technik adapter? Yes, it's been a great tool for bringing some of the narrative devices used in traditional filmmaking, such as narrow depth of field, to video production.
What have you learned from working in the format? I’ve learned that having someone dedicated to video engineering on set is crucial since video is less forgiving than film—especially in the bright areas.
What has been your editorial workflow? With the two HD shorts I’ve directed, we have captured in Final Cut using the DVCPRO100 codec and then dubbed down to 24p DV for editing. Since FCP is resolution-agnostic and only cares about frame rate, it has been easy to edit in DV and conform back to HD for finishing.
How do you plan to distribute your projects? Distribution has been interesting. I have shown films in festivals, but I have also sold one film to HDnet. Filmmakers are also selling films to the Discovery Channel and ESPN because there is very little HD content out there since the format is so new.
What is independent filmmaking? It's like alternative music, and like alternative music, it's sadly now more of a market segment. While independent means outside the studio system, independent films can have budgets of a few million dollars or a few thousand. It's no longer a clear-cut category.
Parting advice? Think before you shoot.
24p Cinematography
Creating the "film look" with video is more than just flipping a switch. It requires learning the established conventions from over a hundred years of filmmaking.
Chapter 3: Cinematography | 24p Cinematography
24p and Cinematography
Cinematography is the art and craft of shooting motion pictures, whether they are film- or digital-based. Before 24p and digital video, the person behind a film camera was called a cinematographer and the person behind the video camera was often called a "shooter." The snobbery and elitism film students exhibit toward their broadcast peers, where the former refer to the latter as "vidiots," are quickly becoming a thing of the past as production and post go digital. 24p brings together many considerations from both the film and video establishments, and this chapter addresses the 24p filmmaking process from both sides where appropriate.
The Purpose of the Shot
A film normally has three acts, but a film can be broken down even further into sequences, scenes, and shots. A sequence is made up of scenes, takes place in one or many locations, and stands on its own as a narrative unit. A scene is one or more shots in a single location. A shot is a continuous view captured with a single camera without interruption.
Figure 1: Shot, scene, sequence, act, film
Shots are the narrative, atomic-level building blocks of a film. Each shot uses a visual language with a grammar that developed from not only the last hundred years of film, but also from centuries of pictorial art, theater, and music. The beginning filmmaker must make himself familiar with this language and grammar through experimentation. This chapter will cover the basics through discussion and demonstration as well as provide information on video engineering, and general tips for narrative and documentary cinematography as it relates to the 24p format. The more a shot carries its weight into the editing room, the more likely the film will succeed in representing the filmmaker’s vision. So it is the cinematographer’s artistic responsibility to create shots that are compelling and become more so when put together by the editor. A compelling shot is not just a shot with a good composition. The movement within the shot as well as movement created by the camera also contribute significantly to a shot’s worth in the editing room.
What Makes a Shot Compelling
The following elements make a shot compelling: motivational purpose, the right information, interesting composition, a variety of counter-perspectives, sound that supports the action, and continuity with other shots in the sequence. What makes a shot compelling is a subjective call, and it isn't as simple as checking items off a list. That said, if you do take the time to thoughtfully
consider these qualities before and while you compose your shots, you are following the right path.
Motivational Purpose
Motivation should not be guided by the chance to use special equipment or to emulate shooting styles currently in fashion. Shots have motivation when there is a reason to cut to or away from them when editing. Motivation is usually driven by sound or movement cues in the shot. For example, an actor turns his head down to look at a box of doughnuts. In the next cut, he is eating one as if it were his last.
Figure 2: From one shot (Shot 1) to the next (Shot 2), it's clear what the character wants to do
The Right Information
Each shot should contain the right information. This information is normally called out in the script through visual or sound cues. These cues can be obvious or subtle, literal or abstract. A shot conveys information either on its own or when cut together with another shot. For example, one shot shows two actors discussing an enticing cup of tea. In the next shot, the standing actor points down at his cup and the other actor looks at the tea.
Figure 3: The right information keeps the audience thinking across shots ("What kind of tea?" "Not just any kind of tea—a special kind of tea.")
Interesting Composition
Composition is much more than arranging elements within a shot according to artistic conventions. It is also about establishing mood, providing information, and keeping the audience interested.
Chapter 3: Cinematography | The Purpose of the Shot
83
A Variety of Counter-Perspectives
Shots should show the perspectives of many voices. This includes shots that show the characters from subjective and objective angles. Going from one shot to the next presents the audience with the opportunity to learn something new. When a sequence of shots has motivational purpose and the right information, changing the camera angle facilitates editing the shots together. While changing the angle helps keep a piece fresh, there are two important rules for preserving continuity in the viewer's mind: the 180-degree rule and the 30-degree rule. The 180-degree rule means the camera should stay on one side of the line of action, so consecutive shot angles never cross it. When consecutive camera angles follow this rule, the shots will match, since characters A and B occupy the same sides of the frame and do not appear to jump within it. There are a few exceptions to this rule, listed after the following figure.
Figure 4: The 180° and the 30° rules dictate where the camera can go between shots. The line of action, or 180° rule, states that once a line of action has been designated for a shot, the camera should not cross it. Cutting between two cameras on the same side of the line makes sense because one character is seen from the other's point of view; crossing the line of action confuses the viewer, because both characters come to occupy the same space in the frame. The 30° rule states that any change in camera angle should be 30° or greater; any less and the next shot looks like a mistake.
The 30-degree rule requires consecutive shot angles to be at least 30 degrees apart. When shots less than 30 degrees apart are edited together, they often don't give the audience enough new information. When to break the 180-degree rule:
• When the camera is moving in the shot. For example, a shot that circles several trees.
• When the characters move in the shot. For example, a shot made through a crowded city transit terminal.
• When the audience has a fixed point of view, such as through a doorway. The camera angles show the viewpoint of watching what comes through the door as well as the viewpoint of entering through it.
• When the shot size is very different from one shot to the next. For example, following a procession down a long corridor and shooting from both ends of it.
Consistency Across Frames in the Shoot
Continuity is key. Any inconsistency across shots in the same scene distracts the audience because it breaks their belief in the film's reality. This means leaving the glass on the table completely full and in the same place when shooting shots that occur seconds apart. Another example is a scene where one shot features actor A sitting with her hand to her head while actor B walks in front of her. When the next shot shows actor B sitting down next to actor A, her hand should still be at her head. With film, continuity is like insurance: if you don't have someone checking for it during production, the editor will curse you because the state of reality is not maintained, and the producer will shoot you as he fumes over the cost of reshooting.
Figure 5: In these two adjacent shots, what's different? Answer: Her left hand is not behind her left ear
Shot Types
Any shot can be put into one of three categories: simple, complex, and developing. Table 1 lists possible movements for each shot type. A simple shot is one where the camera does not move and all action occurs within the frame. A complex shot is one where the camera's support (usually a tripod) is kept in the same position but the camera is directed elsewhere by rotating it sideways (panning) or from top to bottom (tilting). A complex shot can also be a zoom in or a zoom out. A developing shot is one where the camera's base and direction move simultaneously.
Table 1: Shot categorization by movement
Type of Shot | Subject | Lens | Pan & Tilt | Support
Simple | × | | |
Complex | × | × | × |
Developing | × | × | × | ×
Focus
If creating a pleasing composition is the art of cinematography, managing sharp focus is one of its crafts. A soft-focused image obscures information in a shot, looks like an amateur did the shooting, and annoys most viewers, who are used to having everything in focus. That said, having everything in sharp focus would be mundane; keeping either the foreground in focus and the background soft, or vice versa, helps viewers understand the meaning of a shot.
Rack Focus
Racking focus is a narrative film technique where the focus shifts from one subject to another within the same frame. It is seen a lot in over-the-shoulder dialog shots. For example, while one man in the foreground smiles to himself, the camera shifts focus from him to another man plotting against him.
Figure 6: An example of racking focus between two characters
Measuring Focus
The most common ways to focus are to use the autofocus or to set focus manually and eyeball it in the viewfinder. While this is fine for sit-down interviews, it doesn't always work in narrative contexts, because focus is harder to maintain when there is a lot of movement in the scene, when the camera must move, or both. In these situations you have two alternatives: use a follow focus gear, or use the camera's focus readout and a measuring tape.
Using a Follow Focus Gear
The old-school way of measuring and maintaining focus is to use a measuring tape and attach a markable focus ring, also called a follow focus gear, to the lens. A setup like this often requires an assistant, known as the focus puller, to manage the job of maintaining focus while the DP concentrates on framing. The workflow steps include:
1. Have the actor take their first position in the shot, acquire focus, and mark it on the follow focus gear.
2. Have the actor go to the next position in the shot, acquire focus, and mark it on the follow focus gear.
3. Shoot the take and move the focus gear from one mark to the next, maintaining focus throughout the shot.
Using the Focus Readout
In the past, the steps a DP and the camera assistants followed began with measuring the distance between the subject and the film plane and adjusting the focus of the lens to keep the subject sharp. If the plan is for the subject to move, an assistant measures the distances, notes them, and makes marks on the lens barrel indicating where to start and end the focus pull. Today the barrel of a DV camera's lens is not marked with focus distances, so the DP and the assistant use the focus readout in the viewfinder and a measuring tape to do the same job. When the camera's focus readout is given not in real-world units but in arbitrary numbers, the team uses a chart that maps the readout values to real-world distances. The workflow is similar but slightly different:
1. Have the actor take their first position in the shot. Acquire focus and measure from the optical center of the lens to the talent. Note the focus readout and compare it to the tape measurement.
2. Have the actor go to the next position in the shot. Focus, measure, and compare the readout to the tape measurement again.
3. Shoot the take and move the focus between the first setting and the next, maintaining focus throughout the shot.
Figure 7: Marking focus on the focus gear. Measure from the optical center of the lens, then use a dry-erase marker to mark the focus settings for the start and stop points on the focus ring. During the take, simply move the focus ring between these two marks.
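The chart that maps focus-readout values to measured distances can be sketched as a small lookup with linear interpolation between marks. The mark values below are hypothetical examples of what an assistant might record during blocking; real charts depend entirely on the lens.

```python
# Hypothetical focus chart: (camera focus readout, measured distance in feet)
# pairs recorded with a tape measure during blocking. Interpolating linearly
# between marks is a rough stand-in for a printed conversion chart.
marks = [(12, 3.0), (35, 6.0), (60, 12.0)]  # (readout units, feet)

def readout_for_distance(distance_ft, chart):
    """Estimate the focus readout needed for a given measured distance."""
    chart = sorted(chart, key=lambda m: m[1])  # order marks by distance
    r0, d0 = chart[0]
    for r1, d1 in chart[1:]:
        if distance_ft <= d1:
            t = (distance_ft - d0) / (d1 - d0)  # position between two marks
            return r0 + t * (r1 - r0)
        r0, d0 = r1, d1
    return chart[-1][0]  # beyond the last mark, hold the farthest setting

print(readout_for_distance(4.5, marks))  # halfway between 3ft and 6ft
```

Lens focus scales are not actually linear, so real charts use more marks where the scale is steep; the interpolation here is only a first approximation.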
Focal Length
Focal length is the distance from the exposure plane (the imaging sensor) to the optical center of a lens when the lens is focused at infinity. Focal length measures a lens in terms of its angle of view. In general, a shorter focal length gives a wider angle of view and a longer focal length gives a narrower angle of view.
Chapter 3: Cinematography | Focal Length
87
Figure 8: Telephoto, normal, and wide-angle lenses. Focal length is measured from the CCD to the lens's optical center.
Normal Lens
In traditional 35mm photography, a 35mm to 50mm lens is considered a normal lens. It is normal because this angle of view has little distortion and most closely resembles human visual perception. On a one-third-inch CCD camcorder such as the DVX100 or Canon XL2, this translates to somewhere between 5 and 6mm.
Telephoto
A 35mm-format telephoto lens is anything above 60mm. For a one-third-inch CCD camcorder, this translates to 8.3mm and above. A telephoto lens has the effect of compressing space. While this can make elements that are far apart appear closer together for dramatic effect, it distorts the distance between elements. For example, if you shot a traffic jam with a telephoto lens, the cars would seem closer together than they actually are.
Wide-Angle Lens
A 35mm-format wide-angle lens is anything below 35mm. For a one-third-inch CCD camcorder, this is anything below 5mm. A wide-angle lens, as its name implies, captures a wide angle of view. However, when a lens is extremely wide, such as a fish-eye lens, it also begins to distort the image.
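The conversions above imply a "crop factor" of roughly 7.2x between a 35mm still frame and a one-third-inch CCD (60mm / 8.3mm ≈ 7.2). The exact factor varies by camera, so treat 7.2 as an assumption rather than a spec; the function and category thresholds below just restate the text's numbers.

```python
# Assumed crop factor for a 1/3-inch CCD relative to the 35mm format.
CROP_FACTOR = 7.2

def equiv_35mm(ccd_focal_mm, crop=CROP_FACTOR):
    """35mm-equivalent focal length for a 1/3-inch CCD camcorder lens."""
    return ccd_focal_mm * crop

def classify(ccd_focal_mm, crop=CROP_FACTOR):
    """Rough lens category using the thresholds given in the text."""
    equiv = equiv_35mm(ccd_focal_mm, crop)
    if equiv < 35:
        return "wide angle"
    if equiv <= 60:
        return "normal"
    return "telephoto"

print(equiv_35mm(5.5))  # lands in the 35-50mm "normal" range
print(classify(4.5))
print(classify(10))
```

Run against the text's examples: 5 to 6mm maps into the normal range, 8.3mm and above maps to telephoto, and below 5mm maps to wide angle.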
Depth of Field
Depth of field (DOF) is the area in front of the camera where elements look sharp and in focus. Let's assume you're shooting a scene and the subject is nine feet in front of you. When you focus on the subject, the depth of field could range from eight to 11 feet. Anything within this area will be in focus, and anything outside of it will be soft and out of focus. Realistically, only one
infinitely thin plane is truly in sharp focus at any one time, but depth of field is much deeper than this. The thin plane in focus is about a third of the way into the entire depth of field.
Figure 9: Depth of field is the area in front of the camera that is in focus. The plane of sharpest focus sits about 1/3 of the way into the depth of field, with the remaining 2/3 behind it; everything nearer or farther is out of focus.
When shooting extreme close-ups in macro mode, the focus plane is closer to the middle of the entire depth of field.
DOF Decreases as Focal Length Increases
Depth of field is inversely related to focal length: depth of field decreases as focal length increases. This means that a telephoto lens has less depth of field than a normal lens. You can use this property to your advantage when shooting with a zoom lens. First zoom all the way in on a small area of the subject, such as the eyes. Focus the lens so that the eyes are sharp, and then zoom out to the desired framing. Since a parfocal zoom lens maintains the same plane of focus regardless of zoom, you are guaranteed sharp focus.
Figure 10: Depth of field decreases as focal length increases. The wider the angle of view, the greater the depth of field; the narrower the angle of view, the shallower the depth of field.
Conversely, depth of field increases as focal length decreases. This means a wide-angle lens has more depth of field than a telephoto or normal lens. In run-and-gun situations it is best to set focus quickly and then go wide, since depth of field is deeper at short (wide) focal lengths.
DOF Increases as Aperture Decreases
Depth of field is also inversely related to aperture: depth of field increases as the aperture closes. This means there is more depth of field at f/8 than at f/2. When you squint to focus on an eye chart, you are essentially doing the same thing.
Figure 11: Depth of field increases as the aperture becomes smaller and decreases as it becomes larger. A smaller aperture yields a greater depth of field; a larger aperture yields a shallower depth of field.
DOF and the Camera-to-Subject Distance
Depth of field increases as the subject moves farther from the camera and decreases as the subject moves closer. To get more depth of field, move the camera farther from the subject or move the subject farther from the camera. To get less depth of field, move the camera closer to the subject or bring the subject closer to the camera.
Figure 12: Depth of field increases as the subject moves farther from the camera. The farther the subject is from the camera, the greater the depth of field; the closer the subject, the shallower the depth of field.
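All three relationships (focal length, aperture, and subject distance) fall out of the standard thin-lens depth-of-field approximation, which is not from this book; the circle-of-confusion value used below is an assumption, not a camera spec.

```python
def dof_limits(focal_mm, f_number, subject_m, coc_mm=0.006):
    """Near/far limits of acceptable focus (thin-lens approximation).

    coc_mm is the circle of confusion; ~0.006mm is a commonly assumed
    value for a 1/3-inch CCD, but it is a judgment call.
    """
    s = subject_m * 1000.0  # work in millimeters
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = s * (hyperfocal - focal_mm) / (hyperfocal + s - 2 * focal_mm)
    if s >= hyperfocal:
        far = float("inf")  # everything out to infinity is acceptably sharp
    else:
        far = s * (hyperfocal - focal_mm) / (hyperfocal - s)
    return near / 1000.0, far / 1000.0  # back to meters

# Each relationship from the text, holding the other variables constant:
print(dof_limits(6, 4.0, 3))   # short (wide) lens: deep depth of field
print(dof_limits(30, 4.0, 3))  # longer lens: much shallower
print(dof_limits(30, 8.0, 3))  # stopping down to f/8 deepens it again
print(dof_limits(30, 4.0, 9))  # moving the subject to 9m deepens it too
```

Comparing the printed pairs confirms the chapter's three rules: the wide lens reaches infinity, the long lens at f/4 gives only a sliver of focus, and either closing the aperture or backing away widens that sliver.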
Movement
Excluding the movement of actors, movement that changes the view within a shot comes either from camera movement or from changing the focal length of the lens (zooming in or out). The script or storyboards will often call for specific camera movements for each shot. In some cases the director and editor will ask for several different movements for each shot, for flexibility in the editing room. The following sections explain each of the basic forms of camera movement and the equipment used to create it.
Pan and Tilt
Panning involves rotating the camera to the left or right around the vertical (y) axis. This is best done with a pan-tilt head attached to a tripod. While panning can be done while holding the camera, it's not as smooth and is best kept to short pans. Tripod pan-and-tilt heads come in three varieties: fluid, fluid-effect, and friction. A fluid head gives the smoothest pans because the resistance created by pushing oil through the internal mechanisms dampens jerky movements and softens horizontal and vertical moves. A fluid-effect head softens movement with two greased internal plates arranged so that they work against each other to dampen vertical and horizontal rotations. A fluid-effect head is not as smooth as a true fluid head, but it can do the job and costs a lot less. A friction head offers no dampening and is really only good for locked-down shots where no panning or tilting is planned.
Figure 13: Pan and tilt
24p Pan and Tilt Guidelines
There are several guidelines to remember when panning in 24p. Shooting a pan in 24p video is much like shooting a pan in film because of the frame rate they share, 24fps. A pan that is done too quickly causes judder: noticeably jerky movement of elements within the frame. This judder creates a strobe effect, which reinforces the perception that 24fps is too slow (as many videographers feel it is). To avoid judder:
• Limit the speed of your pans. Table 2 lists recommended panning speeds for shooting 24p with a miniDV camcorder such as the DVX100A or XL2.
• Turn off Optical Image Stabilization (OIS) when panning on a tripod. It will fight you the entire length of the pan and create more judder.
If you want a fast pan, consider cutting between two shots instead; it can often give the same visual effect, but remember to follow the 30-degree rule.
Table 2: Recommended panning speeds for a given focal length
Pan Angle | 4.5 mm | 11 mm | 40 mm
45° | 5 seconds | 12.5 seconds | 30 seconds
60° | 7.5 seconds | 18.75 seconds | 45 seconds
90° | 10 seconds | 25 seconds | 90 seconds
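The panning-speed table is easy to carry on set as a small lookup. The sketch below simply encodes the table's values; the helper names are mine, and the derived degrees-per-second figures are arithmetic on the table, not additional guidance from the book.

```python
# The panning table encoded as data: seconds to complete a pan of the given
# angle at the given focal length (miniDV, per the text).
PAN_SECONDS = {
    4.5: {45: 5.0, 60: 7.5, 90: 10.0},
    11:  {45: 12.5, 60: 18.75, 90: 25.0},
    40:  {45: 30.0, 60: 45.0, 90: 90.0},
}

def min_pan_seconds(focal_mm, angle_deg):
    """Look up the slowest recommended pan duration from the table."""
    return PAN_SECONDS[focal_mm][angle_deg]

def max_pan_speed(focal_mm, angle_deg):
    """Maximum judder-free pan speed, in degrees per second."""
    return angle_deg / min_pan_seconds(focal_mm, angle_deg)

print(max_pan_speed(4.5, 90))  # wide focal length: faster pans tolerated
print(max_pan_speed(40, 90))   # zoomed in: pan far more slowly
```

Dividing angle by duration makes the underlying rule visible: the longer the focal length, the slower you must pan to avoid judder.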
Tilting the camera refers to rotating it upwards or downwards around the horizontal axis. A tilt can be accomplished handheld or by rotating the camera on a tripod. Panning and tilting are often done to expand what the viewer sees. The information shown can be pictorial, such as a landscape, or a pan can take the place of a cut between two actors to show connection in addition to reaction.
Dolly and Track
To dolly, the camera moves towards or away from the subject, either handheld or with a platform dolly. A dolly move is a more natural and often better alternative to a zoom, because the human eye does not zoom: when someone wants a closer look, they simply move closer to the subject. To track, the camera moves left or right of the subject along the horizontal axis. This is accomplished by walking side by side with the subject for a handheld shot, or by positioning a platform dolly parallel to the subject and facing the lens towards the subject. Tracking is also referred to as trucking.
Dollying is moving the camera closer to the subject along a perpendicular axis.
Trucking is moving the camera relative to the subject along a parallel axis.
Figure 14: Dolly and track camera movements
Dolly and tracking shots are accomplished with the same equipment: dollies. Dollies come in all shapes and sizes. The most guerrilla approach is to use an old wheelchair, with sandbags and twine to stabilize the camera to the chair. This can create very smooth shots, but you are limited by how well you can attach the camera to the chair. Filmmakers with access to basic tools tend to build a skateboard dolly from skateboard wheels positioned at a 45-degree angle to one another under a long piece of plywood. The camera and tripod sit on top of the platform, and a handle pulls or pushes the platform as the dolly glides over PVC pipe. If you're interested in building a dolly like this yourself, go to a search engine such as Google and search for "build skateboard dolly." Professional dollies offer smoother motion, the ability to support much larger cameras, and the ability to pedestal (go up or down). A Chapman Super Pee-wee is a versatile dolly: its wheels run on both straight and curved track, the dolly can be brought to a smooth stop, and the camera is raised and lowered with a hydraulic arm that can be set to move up and down within a preset range. A Chapman dolly requires at least two people to operate, not including the cinematographer.
Pee Wee dolly with pneumatic wheels
Spider dolly with skateboard wheels
Figure 15: Chapman dollies
Since all action in a movie is not stationary, a dolly shot is best when the shot must follow the characters, for example, following a sidewalk conversation or capturing a long walk down a hallway. Like a pan, a dolly shot can be also used to expand the space shown to the audience.
Pedestal To pedestal the camera simply means to raise or lower it. This can be done handheld, by raising the center column on a tripod, by using an expensive dolly, or by using a dedicated pedestal. Pedestal shots are great for showing head-to-toe shots or when there is information to be conveyed by shooting low-to-high or high-to-low.
As the camera pedestals up, the subject exits the bottom of the frame.
Figure 16: Pedestalling up
Zoom Zooming alters the lens's focal length so the subject appears closer or farther away. Zooms should be avoided unless a dolly move is not possible or the zoom is required for dramatic effect, such as the "Powers of Ten" shot made popular by the designers Charles and Ray Eames. A popular shooting technique is to simultaneously dolly in while zooming out, or dolly out while zooming in. This complex (and often unsettling) shot maintains the relative size of subjects while changing the perspective. It's a dramatic effect often seen when focusing all attention on one character.
When the camera simultaneously dollies in and zooms out, the relative size of objects remain the same, but the perspective changes dramatically.
Figure 17: Simultaneously zooming out while dollying in
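The geometry behind this move comes down to one relation: a subject's on-screen size is roughly proportional to focal length divided by distance, so keeping that ratio constant keeps the subject the same size. The following Python sketch is my own illustration, not from the book:

```python
def dolly_zoom_focal(f0_mm, d0_m, d_m):
    """Focal length needed at the new distance d_m to keep the
    subject the same apparent size it had at distance d0_m with
    focal length f0_mm (on-screen size ~ focal length / distance)."""
    return f0_mm * (d_m / d0_m)

# Dollying in from 4 m to 2 m while keeping the subject the same
# size means zooming out from 40 mm to 20 mm.
print(dolly_zoom_focal(40.0, 4.0, 2.0))  # 20.0
```

The background, sitting at a different distance, does not obey the same ratio, which is exactly why the perspective appears to stretch or compress behind the subject.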
Arc Arc means moving the camera in a circular arc around the subject. A rough arc can be done handheld or by placing a tripod on a dolly with circular or flexible dolly track. An arc can be used several ways. A common application is to show a couple dancing. The camera arcs around them to show their reactions to one another as well as the surrounding space. A slow arc movement around a single character makes the audience view the character as an object. An arc around a close-up of a character’s face can be a device to show a change in mood, outlook, or even age if effects are employed.
When the camera arcs around a subject, it’s like the end of a pendulum swinging around it.
Figure 18: Camera movement with an arc
Crane Shot This movement involves the use of a long mechanical arm, called a jib or boom arm, attached to a tripod that can pivot up and down. The camera is attached to the end of the arm, and often the end of the arm has a fluid pan-and-tilt head for the camera. This allows for greater flexibility when moving the jib arm. On larger sets, an industrial crane with a long extension arm is used for very dramatic shots.
A crane movement is like a pedestal movement, but differs in that the camera pivots to remain focused on the subject, and the movement may include dollying or trucking.
Figure 19: Camera movement with a crane
Since no one gets around in the world by sitting on a camera crane, the shot it produces is not naturalistic because it is not humanly possible. The crane shot is effective when trying to show a godlike or epic view of the scene. A crane shot can be used as an establishing shot or as a way to end a scene or a movie as a character walks off towards the horizon.
Hand-Held Camera Work While a handheld shot gives the impression of intimacy and first-person point of view, an entire film shot handheld is tiresome to the audience and might even cause nausea. In addition, handheld camera work with a DV camcorder does not fare well when blown up to film or fed through a compression workflow for web or DVD video, because stable shots contain more detail and have fewer motion artifacts. This is not meant to be discouraging, but know that it takes a lot of practice and skill to shoot well handheld.
Keep camera and elbows close to your body
Figure 20: Handheld ergonomics prevent fatigue and keep shots steady
Camera Stabilizers The Steadicam is the original stabilization system for handheld cinematography. It produces a smoother shot by attaching the camera mount to the operator's body. The camera rests on an arm with fluid movement, and the arm connects to a vest the camera operator wears to reduce the burden of carrying the camera. The operator can swing the arm around and raise or lower it to take a shot. Gyroscopes in the arm keep the camera balanced and produce smooth shots. While there is some noticeable movement in a Steadicam shot, it, like many other handheld shots, connotes a personal point of view since there is humanlike motion in the image it produces. When following a single character or exploring a dark space, the Steadicam does an effective job of putting the viewer in the film. For DV cameras such as the DVX100 or XL2, less-expensive camera stabilizers are available in the $200 to $500 range. They work by counterbalancing the camera with weights and by positioning the camera forward or backward on the mounting plate. While these units are much cheaper to rent or buy, they put a lot more strain on your arms and take a little more practice to keep shots smooth. Manufacturers such as VariZoom and Glidecam now make vests and arms that work with these units to make them more like Steadicams without the high cost.
Keep the arm close and at a 90-degree angle
By keeping the stabilizer close to your body, you can hold it longer.
Figure 21: The VariZoom camera stabilizer
Car Mounts Because so many films, documentaries, and commercials involve car scenes, there are many car mounting systems. Directors often want to shoot over the car to show the road ahead or behind, or from the side of the car to show the driver or passenger reacting to other characters in the car. Most mounting solutions are very expensive to rent or buy because they are intended to be rock-solid and safe for the weight of film or heavy HD cameras. In the past year, several companies have begun to make more-affordable mounts for DV-sized camcorders. A few years ago, I cobbled together a car mount using a mounting kit intended for SLR cameras. I bought a more robust base for the kit and a spirit level to put in the hot shoe to ensure that the camera remains level.
Figure 22: A lightweight camera like the DVX100 can be mounted to almost any side of a car
The second car mount kit I bought is a GTMount™ from Vancouver, British Columbia (www.gtmounts.com). It's an adjustable mount that can be attached to all sides of a car and comes with a set of ratchets and straps for securing it. I found this kit to be a great, inexpensive option for lightweight DV camcorders such as the DVX100A.
Shot Sizes The three most common shot sizes are long, medium, and close-up. There are four levels of a long shot, only one true medium shot, and four levels of a close-up. These designations are a handy reference when describing and setting up shots. The size range facilitates scene continuity for a single location because the shots are sized variations of the same overall wide shot. The audience recognizes elements from the wide shot that are now bigger in a medium or close-up shot, and continuity is maintained.
Long Shots Long shots are broken down by extreme long shot (ELS), very long shot (VLS), long shot (LS), and medium long shot (MLS). In an ELS, the subject is dwarfed by the frame. The subject’s height is 16 percent or less of the frame’s height and as a result is hard to recognize. The extreme long shot is used to show a location from a distant vantage point. It’s mostly used for opening and closing establishing shots.
Extreme long shot (XLS or ELS)
Long shot (LS)
Very long shot (VLS)
Medium long shot (MLS)
Figure 23: Various types of a long shot
Long shots are effective at including all the action in a scene. They make following movement and action easier because everything fits into the frame. Long shots also serve as an important pacing tool when the viewer needs a pause before being shown more detail in a close-up. The obvious downsides to long shots are: they don’t show details that would be obvious in medium or close-up shots, and they can’t show the mood or expression of individual characters.
Chapter 3: Cinematography | Shot Sizes
101
Medium Shot A medium shot frames a character from the waist up. It shouldn't be confused with a three-quarters shot, which frames the character from the knees up. Medium shots are the most frequently used shots in production. They have little to no distortion and portray elements within a scene at normal focal lengths similar to human vision. A medium shot usually contains all the action within a shot, and it is best if it is intended to be a subset of the establishing or wide shot so the two shots create a smooth edit.
Medium shot (MS)
Figure 24: The medium shot
While the medium shot is not broken into additional sizes like the long or close-up shots, there are several staging options that apply to all medium shots: single, double, over-the-shoulder, and group shots. Singles are shots with a single character. Doubles are shots with two characters side by side—for instance, two characters sitting side by side on a train. Over-the-shoulder shots are two setups where a camera is positioned behind each actor and the shot shows one actor’s shoulder and the face of the other actor in the foreground. Since over-the-shoulder shots are all about showing reaction, they transition well to close-ups where an actor’s counterreaction is even more dramatic.
Close-Up A close-up is what makes film and television different from theater. A close-up is a small portion of the action in a shot blown up to fill the entire screen. This has the effect of showing a lot more information, placing the viewer closer to the subjects, highlighting important information, and excluding information not relevant to the action shown.
Medium closeup (MCU)
Closeup (CU)
Big Closeup (BCU)
Extreme Closeup (ECU or XCU)
Figure 25: Medium close up, close up, big close up and extreme close up
Close-ups are harder to shoot than medium or long shots because maintaining sharpness and keeping crucial action within the frame is not easy with fidgety actors. Close-ups are to be used sparingly; in many cases, the most important information can be shown in a medium shot. Close-ups show emotion powerfully. For instance, you wouldn't want to cut to an extreme long shot of a couple kissing; that moment is better served by a close-up. By the same token, close-ups are much more intimate than medium and long shots and can be uncomfortable for viewers when there is graphic violence or the subjects aren't pretty, as is often the case with new casts and documentary films. Extreme close-ups (XCUs) focus on the eyes, mouth, or hands to isolate minute actions for the viewer. Think about a finger on a trigger, two hands shaking, or a pair of eyes squinting. Very close-ups (VCUs) show most of the face, cropped so that the eye can easily reconstitute the offscreen boundaries using the imagination. This is as intimate as it gets and really shows the character's reaction. Big close-ups (BCUs) are very tight on the face and show the entire head with just a slight amount of neck and shoulders. Big close-ups and close-ups are the shots most used for newscasters and "talking head" presentations. While effective for getting a quote and making a subject appear authoritative, talking heads get boring rather quickly because there is little action and the tight framing is no longer showing intimacy or emotional reaction.
Shot 1
Shot 2
Figure 26: Closeups carry different emotional weight; you can see that the baby is upset from far away!
Camera Angles Shot angles are the levels at which a subject is viewed:
• An eye-level shot matches the eye level of the subject in the shot. Typically this means having the lens somewhere between five and six feet above the ground. The eye-level or straight-on shot is the most frequently occurring shot in film because it matches the perspective from which most people see the world.
• A low-angle shot is taken below eye level and points up towards the subject. A low angle gives the subject prominence and strength; it conveys admiration. At extreme low angles, the subject appears gigantic and imposing.
• A high-angle shot is taken above eye level and points down at the action. This has the effect of making the subject appear small and subordinate. At very severe high angles, the subject appears at the mercy of the viewer.
• An extreme high-angle or bird's-eye-view shot is positioned directly above the subject at a 90-degree angle. This is great for aerial shots of landscapes or urban environments and conveys a journey or path taken.
• A Dutch- or oblique-angle shot has the camera tilted to create an image on a diagonal axis. It immediately conveys that something in the shot is off-kilter and in need of attention.
Low angle
High angle
Eye level
Extreme high angle
Dutch angle
Figure 27: Camera angles
Two terms that are often used interchangeably, but incorrectly, are framing and composition. Framing is determining what elements are in the picture; composition is determining how those elements are positioned relative to one another within it. In simpler terms, framing is the size of the shot, and composition is the arrangement within it. Positioning Subjects in Front of the Camera The actor's position relative to the camera indicates how the filmmaker wants to present her to the viewer. When an actor looks straight into the lens, this is a subjective position because there is a connection between the character and the audience. When the actor looks offscreen and is seen in three-quarter or full profile, this is an objective position because the actor is not addressing the audience, and the audience can view the actor in relation to her environment or other actors in the scene.
Closed and Open Framing Closed framing means that the subject bleeds to the edge of the frame. Open framing leaves room between the subject and the edges of the frame. Both techniques affect the viewer's interpretation of the shot. When a path is cropped at one edge of the frame (closed framing), the path leads the eye from outside the frame into it. Likewise, when the path is not cropped at all, the eye is not invited into the frame. Nose, Head, Pointing, and Lead Room Nose room, also referred to as "talk space," means leaving ample space between the subject's nose and the edge of the screen toward which the subject is looking. When the subject's nose is against the edge of the frame, it looks as though she is being blocked by something offscreen. If that is in fact the case, go ahead and shoot it that way; if it is not, know that this distracts the viewer.
Open framing on either side
Closed framing in one corner
Figure 28: Closed and open framing
Shot 1
negative head room
Shot 2
little head room
plenty of nose room
ample nose room
Figure 29: Nose and head room
Headroom means having a comfortable amount of space above the subject's head. Having no headroom creates tension because the eye is drawn to the area where the head touches the top frame edge; it's as if the actor's head is fastened to the top of the frame. Too much headroom makes the frame appear top-heavy. A comfortable amount of headroom is about 15–20 percent of the frame's height between the top of the head and the top edge of the frame. In a close-up shot, cropping the actor's head by a few inches is acceptable because the viewer intuitively fills in the rest of the person's head offscreen. In a medium or wide shot, cropping the head appears to be a mistake.
Pointing room is very similar to nose room in that you want to maintain ample space between the tip of the finger and the element that is offscreen. Whatever you do, you should not crop the person's finger out of the frame unless your goal is to create a sense of dismemberment.
Given the direction the character is moving, there should be more room in front of him than behind him. Figure 30: Pointing and lead room
Lead room is also like nose room and pointing room, but it applies to elements in motion. When an actor walks across the frame and the camera follows him, the camera should anticipate the actor's movement and leave room between the actor and the direction in which he is walking. Eye Line and Reference Points An eye line is an imaginary line between a character's eyes and the reference point the character is looking at. While not physical in nature, an eye line influences the frame's composition because the audience will focus on the actor's eyes and subconsciously trace the line between the actor and the reference point. The reference point may or may not be on-screen, and when an actor looks offscreen and an edit occurs, it helps if the reference point appears in the location where the actor was looking.
Figure 31: Eyeline
The Rule of Thirds The rule of thirds is a technique in which the frame is divided into thirds both vertically and horizontally. The eye line is placed on the first horizontal divider from the top, and a subject is centered on either of the two vertical divider lines. When an element, say a tower, is in the center of the frame, there is little interest and meaning in the picture because of the inherent symmetry. Most meaning is created by juxtaposition and contrast: high and low, near and far, good and evil, hot and cold. If the tower is placed on a third, it is more visually interesting because the eye appreciates counterbalance.
The character is framed along the thirds. Notice how his arm is aligned along two of the guides.
Figure 32: The rule of thirds
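If you ever need to overlay a thirds grid on a frame in software, the guide positions are simple fractions of the frame dimensions. A minimal Python sketch (my own helper, not from the book; 720×480 is the standard NTSC DV frame size):

```python
def thirds_grid(width, height):
    """Return ((x1, x2), (y1, y2)): the two vertical and two
    horizontal guide positions for the rule of thirds."""
    return (
        (round(width / 3), round(2 * width / 3)),
        (round(height / 3), round(2 * height / 3)),
    )

# A 720x480 DV frame: verticals at x=240 and 480,
# horizontals at y=160 and 320.
print(thirds_grid(720, 480))
```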
Avoid Ambiguous Framing It is important to avoid ambiguous framing choices. When framing an element, avoid the area that is close to the center of the frame because the viewer will question whether or not your
choice was to put the element in the center or not. It appears to be a mistake and distracts the viewer from whatever you are trying to convey in your film.
Camera Craft The job of the cinematographer is not to make the audience say, "What amazing cinematography." If the audience talks only about the cinematography, the filmmakers have failed. Cinematography helps tell the story; it does not distract from it. Within this constraint, however, the cinematographer should find every opportunity to delight, surprise, and keep the audience engaged. Success is achieved when the cinematographer has developed her skills, when her workflow is smooth and facilitates rather than hinders her craft, and when she follows (and occasionally breaks, for dramatic effect) the formal rules grounded in narrative and cinematic convention. Quick Zooms and Whiplash Panning Fast, unmotivated zooms and indiscriminate whiplash pans are a clear sign of an amateur filmmaker. You rarely see them in professional video and cinematography. Filmmakers almost always shoot with a fixed-length lens; if they want the camera closer to the subject, they move the camera, not the lens, because it looks more natural, as if the audience were moving closer to the subject. Quick zooms and pans often look blurry, and they can strobe; at 24fps, a quick pan looks even worse. Not all zooms are bad; they just need to be motivated by the narrative needs of the story. Instead of zooming in on a subject, cut from a medium shot to a close-up. While this sounds counterintuitive, it is what filmmakers and editors have been doing for over a century: the viewer's eyes and imagination connect the dots and create the rest of the zoom in the mind. This is the real power of the language of film, and you should employ it wherever possible.
Best Practices While many veterans will tell you to be organized, they don't often tell you how to be organized. The following is a collection of tips and advice for the first-time filmmaker. While some of it means another purchase, some of it is free and requires only common sense. Preroll and Postroll Whenever shooting, it is always a good idea to let the camera roll for 10 to 15 seconds before a shot begins and after it ends. This practice prevents you from accidentally cutting off the action before it ends. For example, after the last question in an interview the subject might say, "I forgot to mention..." and then begin to speak from the heart. By letting the camera roll a little longer, you give the subject the chance to say something else, which could be a gem. You will also have the option to use the piece as one long take rather than having to cut it together. Having room at the beginning and end of each take also makes logging and editing easier, because the extra time separates the clips from one another and leaves plenty of room to log each one as a separate clip.
Chapter 3: Cinematography | Camera Craft
109
Labelling and Blanking Tapes Before the Shoot Before I do a shoot, I take out all the tapes I think I am going to need and unwrap, blank, and label them. I do this well ahead of arriving at the shoot; you will often not have time to do it when you arrive, and it's one less thing to worry about. I label the paper insert as well as an adhesive label for the tape's spine. My naming scheme is to abbreviate the project name to four letters, then add an underscore and a three-digit number. This becomes the tape name. Tape names are crucial for log and capture when you need to recapture material from tape. Having a system makes your editing bins easier to decipher and the process of recapturing efficient. I will write basic log notes on the cassette's paper label. It might be something like, "Chicago, Company Name: Customer Interview." Organizing All Your Gear Don't stash your camera anywhere you can find space. Designate space for storing equipment, and invest in hooks, plastic boxes, freezer storage bags, plastic and Velcro ties, and a permanent marker. When your gear is organized, you are less prone to lose it, you find things more quickly, you have more time to focus on the shot, and you look professional.
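As a sketch of this naming scheme (the helper name and the exact abbreviation rule are my own assumptions, not the book's), a tape name could be generated like this:

```python
def tape_name(project, number):
    """Abbreviate the project to its first four letters, then
    append an underscore and a zero-padded three-digit number."""
    letters = "".join(ch for ch in project if ch.isalpha())
    return f"{letters[:4].upper()}_{number:03d}"

print(tape_name("Chicago", 1))  # CHIC_001
```

The zero-padded number keeps tape names sorting correctly in editing bins once you pass tape 10.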
Camera ready to go
Before a long day of shooting, I will prerecord bars and tone on the tapes and put the first tape in the camera so it is ready to go. If I'm shooting with a shotgun microphone, I'll mount it and set the audio channels to power the mic and record on those channels. I'll put a fresh battery on the camera or hook up the power supply. By doing all this, I can reduce my setup time dramatically. Additional gear
Tapes, lens cloth, batteries, and power supplies are all neatly tucked away. The white balance card, wireless audio kits, business cards, directions, and anything else go in the other compartments. My laptop can also fit in the long pockets outside.
Figure 33: Well organized gear ready to go
Cables Don't throw all your cables together in a box; they will become a thicket of twisted, tangled wire. Begin by gathering all the AV cables you use with your video camera. In most cases, this will be a FireWire cable, an S-Video cable, and a set of stereo RCA cables. Fold each one up individually and use a plastic twist-tie to keep it folded. When these main cables are tidy, put them in a freezer bag that has a resealable top and a markable surface. Label the bag "camera cables" and keep it with the camera's carrying case. All other cables should be sorted by type, folded, and then placed in a labeled bag. For instance, I have separate bags for FireWire, USB, and S-Video cables.
For longer, heavier cables such as XLR cables or extension cords, I have found the Velcro ties made by Rip-Tie (http://www.riptie.com) worth the investment. Placed on the end of a cable, they make securing it much easier once it has been folded. It also pays to learn the trick of coiling cable efficiently.
Folding audio cable
Wrap the cable as if it were a garden hose. Connect the ends together and use a strap to fasten the coil. Here I'm using a Rip-Tie connector.
Figure 34: Folding cable properly
Packing Gear Spend some time figuring out your requirements for carrying and transporting gear. If you are going to be travelling, does your case fit in an overhead compartment? If you plan to check your camera in, can the case protect your gear from baggage handlers who will toss it like a bag of laundry? Pelican cases are great for customizing since the foam can be pulled out easily, and it is perforated into half-inch cubes. Label Everything Investing in a $20 label printer is perhaps some of the best money you can spend. I have labelled all my gear with one, and it has saved me time when looking for matching portable audio receivers and transmitters. Labelling your carrying cases is a no-brainer, too. Since some people are actually honest, you have a greater chance of getting stolen or lost equipment returned to you if you label it. Keeping Inventory and Manuals Handy Keep all the manuals for your gear in one place. I have a hanging file where I keep all manuals for audio, video, and support gear. I keep the manuals for my wireless microphones and camera inside the camera case between the foam padding and the hard plastic shell. Always look for PDF
versions of the manuals. In case you lose the printed one, you can have the PDF version printed and bound at a copy shop. Camera Production Tips Treat the camera you own or rent like any other piece of expensive equipment: with a lot of care. Store it in a protective case with a hard shell and interior padding, or in a soft case with a lot of interior padding. Never store it next to sharp objects, and keep it in a cool, dry place away from heat and humidity. When travelling, don't just put the camera in a suitcase and assume the padding from your socks will do. I had an art teacher in college who said that when he sent his paintings out for a show, he would pack the pieces as if they were going on the Titanic and would be found years later unharmed because they were packed so well. While you may not have to go to those lengths, your camera's case should provide protection from accidental jolts and the elements. Don't Use the Camera as a Deck While tape-based video cameras can serve as a capture device, their transport mechanisms are not as robust as the mechanism in a video deck. For this reason, it is best to have both a camera and a deck. When I finish a tape, I don't use the camera to rewind it; I put it back in the case, and when I begin logging from it, I will log the entire tape as one clip even if I don't capture all of it in the end. When I put the tape in the deck and begin to log and capture in my non-linear editor (NLE), I set the out-point first and then rewind the whole tape, setting the in-point just before the bars and tone end. If you want to put even less wear and tear on your deck, or if you are going to capture with the camera but want to reduce the amount of rewinding it does, purchase a tape rewinder. They cost around $20 and are a great investment; they don't read timecode and only rewind tapes. Preparation and Setups Make sure the camera is on a tripod and level.
Tighten the tilt head, or you could be in for an unpleasant surprise if you apply any pressure up or down during a pan. The batteries should be fully charged; I have at least three batteries on hand in addition to the AC adapter. Whenever possible, plug into AC power to conserve battery life; you never know when you will need a full battery because no outlets are near the action. The Steps Involved in Completing a Setup 1. First, block the shots.
» Decide where the actors should be in the shot relative to the camera. This can be done using stand-ins, people who double for the talent. Create marks for actors who move in the shot: where they start, where they end, and any point in between. » Choose the shot size and camera movement. Will the shot be a close-up or medium shot? Will the camera be stationary, or will there be a dolly move? Refer to the shooting script. 2. Position the lights and check for exposure once the talent and camera positions are defined. Look
at the monitor and make sure the exposure is between 8 and 100 IRE. Make adjustments as
necessary. 3. Rehearse. Do a few run-throughs with the actors. 4. Shoot the shot. This isn’t as easy as pressing the Record button. There’s a protocol that you’ve
probably heard in films dramatizing production. Here's the sequence: » "Quiet on the set!" is called by the assistant director. Take her seriously when she says this. » "Rolling!" is called by the cinematographer when the camera has begun recording. » "Speed!" is called by the sound mixer when the sound recorder has begun to record. Given the built-in audio recording on most DV cameras, this is done when the production is using a second sound recorder as backup. At this point all cast and crew should be at their marks and ready to begin. » Scene and slate. An assistant camera operator reads the scene information and then claps the slate in front of the camera. The clap serves as an audible cue for synchronizing audio with the picture in the editing room. Cameras such as the AJ-SDX900 offer jam sync, which means the slate, camera, and second recording unit can share synchronous timecode, making it even easier to sync sound and picture. » "Action!" is called by the director when the action should begin. » "Cut!" is called by the director when action and recording should stop.
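When reviewing captured frames in software rather than on a waveform monitor, the IRE check in step 2 can be approximated. Assuming 8-bit Rec. 601 luma, where code 16 maps to 0 IRE and code 235 to 100 IRE (a common approximation; the mapping and function names here are my own sketch, not the book's):

```python
def luma_to_ire(y):
    """Approximate IRE for an 8-bit Rec. 601 luma code value,
    where code 16 = 0 IRE and code 235 = 100 IRE."""
    return (y - 16) / (235 - 16) * 100

def exposure_in_range(y, low_ire=8, high_ire=100):
    """Rough check against the 8-100 IRE range suggested above."""
    return low_ire <= luma_to_ire(y) <= high_ire

print(round(luma_to_ire(235)))  # 100
```

A mid-gray code value of 128 lands near 51 IRE, comfortably inside the range; values near 255 overshoot 100 IRE and would read as clipped whites.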
Shooting for Post Processing Effects The following several sections describe things to keep in mind when shooting for post effects; they are also good practices in general. Reduce or Turn Off Detail or Sharpening A camera’s detail or sharpening setting is often used to boost sharpness. While this may be fine for footage that will not undergo any postprocessing, it is not recommended if you plan to use a product such as Red Giant’s Magic Bullet or Nattress Film Effects, or if you are planning on doing a film-out. When the DV codec encounters sharpening, it creates additional compression artifacts known as ringing or moiré, and overly sharp images are a telltale sign of bad video. It’s hard to make bad video look like film. On the Panasonic DVX-100, turn down Detail and Vertical Detail to very little or none. On the Canon XL2, turn down Edge Sharpness and Coring. Exposure Is Everything The DV codec, like most digital codecs, is not kind to blown-out whites. While this may be the look you’re going for, you are far better off doing this in post, where you have more control over the entire image. Keep your brightness values below 100 IRE, or at least turn on the zebras on your DV camcorder. Stop down a stop or two when the highlights begin to clip. In general, it is
better to shoot the image slightly underexposed (and I stress slightly) and crank the brightness up later in post. You should use a graduated neutral-density filter when shooting outdoors in bright sunlight. Shooting without one will blow out the sky and make the subject appear to be backlit. Stopping the entire image down with the iris or the camera’s neutral-density filters will dull the image indiscriminately. A graduated ND filter contains a translucent gradient in the glass that cuts the brightness progressively less from top to bottom. This brings the sky under 100 IRE while not underexposing the subject. If the subject still appears backlit, a bounce card or reflective disc can serve as a fill light. The DV codec is equally unforgiving when it comes to dark, severely underexposed images. When a dark image is recorded to tape, the codec crushes the shadow detail and creates dark artifacts that are both muddy and blotchy. When you try to adjust the levels, these artifacts are impossible to repair. Again, look at the waveform monitor and be prepared to throw another light on the set, or shoot at a different time of day when more light is available. It should go without saying that you want to get the best unadulterated exposure you can and avoid having the camera’s codec, poor light, or a Gaussian filter screwed onto the camera’s lens make artistic decisions for you. If you shoot an image that is balanced and properly exposed, you will have far more creative options available to you in post and your footage will look better on film, DVD, or on the Internet. Shooting for Continuity Before we move on, the last advice is to shoot all takes in the same scene in the same way. This means having the same lighting conditions and the same camera settings (such as exposure and white balance) in each shot for foreground and background elements but more importantly for skin tones.
When you color-correct or apply film looks, you want predictable results across all your shots. This means after setting up lights and getting everything just right, take a white-balance setting, save it, and don’t touch it until the lighting changes. The continuity director and gaffer should be in cahoots and have marked down everything needed to relight the scene should a fire alarm go off and you have to finish the shoot next week.
Shooting Blue and Green Screen Process photography is shooting a foreground element such as an object or talent against a color, normally blue or green, for creating a composite with a background plate. For example, you cannot afford to shoot your talent in front of the Eiffel Tower, so you shoot them in front of a green wall. In postproduction you key out (remove) the green color, and you are left with only the talent which you can superimpose on top of a picture of the Eiffel Tower. 24p cameras are excellent choices for shooting process photography because they shoot in progressive mode, which makes compositing much easier than interlaced footage. Since progressive footage keeps all the information in a single frame intact, it’s easier to pull a decent key. When motion is split within a frame across two fields of video, it’s much more difficult to pull a clean key since the motion is slightly stuttered.
Now, it’s not like a 24p camera will pull perfect keys, because there are other things that factor into the equation such as lighting, compression, and the subject, which are discussed in the next few sections. Backdrop Options You can shoot talent against paper or fabric backdrops, against painted walls, or you can use a combination of portable backdrops and walls. Backdrops are smaller and transportable; painted walls offer larger spaces but require more care to keep clean and require dedicated space. Paper is cheaper than fabric, and paint is even cheaper than paper if you are painting on an existing wall and are not building a platform. Framed flexible fabric backdrops will run from $150 to $400. Rolled fabric will run $20 per yard for a roll that is five feet wide. A 9-foot by 3-foot roll of green paper is about $50, but the stand for holding the paper costs about $150. You can find resellers of blue and green screen paint, backdrops, and kits online by searching the Internet for “blue screen material.” Lighting Issues All lights should be the same color temperature. Lighting a green screen with varying color temperatures will create subtle, uneven color shifts on the screen that can confuse the keying software. For example, if you mix tungsten and daylight-balanced lights, the screen will cast an orange-yellow tint. If you use tungsten lighting on a shoot in broad daylight, it will cast a blue tint. In either case, you don’t want the lights to add color to the screen. That is going to make things difficult to key, since the keying software is looking for blue or green and not for some new color created by mixing lights of different color temperatures. The screens should also be lit as evenly as possible. An evenly lit screen is easier to key because the screen appears as one solid color. 
When you don’t light the screen evenly, you have to create garbage mattes and adjust the white and black points before keying, which means more work and an overall decrease in the quality of the matte. If you are paying someone else to do this work for you, the less work you give them means the less money you have to give them and the better the final composited shot will look, so don’t get lazy and assume it can be fixed in post at no cost.
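To see why an even, distinctly colored screen matters to the keying software, here is a toy color-difference key sketched in Python. The function name and the softness gain are illustrative assumptions of mine, not any product's actual algorithm: real keyers are far more sophisticated, but they share this core idea of measuring how much a pixel's green channel dominates.

```python
def green_key_alpha(r, g, b, gain=4.0):
    """Crude color-difference key for 0.0-1.0 channels: the more the
    green channel exceeds the other two, the more transparent the
    pixel becomes (alpha -> 0). The softness gain of 4.0 is an
    arbitrary illustrative value."""
    spill = g - max(r, b)                        # how "green" the pixel is
    return 1.0 - max(0.0, min(1.0, spill * gain))

# An evenly lit screen pixel keys out; a neutral skin tone stays opaque.
screen_alpha = green_key_alpha(0.1, 0.9, 0.1)
skin_alpha = green_key_alpha(0.8, 0.6, 0.5)
```

Notice that an unevenly lit screen produces intermediate `spill` values, which is exactly why such footage needs garbage mattes and extra adjustment before keying.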
A good key produces a clean composite.
A bad setup makes compositing hard.
Figure 35: A good and a poor green screen setup
Subject Considerations The foreground elements should never be close to the screen for two reasons: to prevent cast shadows on the screen and to prevent spill, when colored light reflects off the screen and falls onto the subject. Subjects should not wear colors close to the color of the screen, or their clothes will key out. (If you want parts of them to key out, then go ahead and have them wear green.) This also goes for objects. The object should not have a color similar to that of the screen. Also, try to avoid fully transparent objects if you are shooting in a 4:1:1 format such as DV. They simply won’t produce a good key. Shoot in a format with more color and detail.
Compression If you are only shooting a handful of color process shots, you may want to seriously consider renting a camera that has better color sampling and a higher bitrate for only these shots. The extra color and detail will create cleaner composites and require less effort for difficult keying jobs, so a day or two of rental may pay for itself in post. The Panasonic SDX-900 and HVX-200 are capable of recording color at 4:2:2 and media at 50 Mbps (the HVX-200 also shoots DVCProHD at 100 Mbps), and both cameras shoot in 24p. In contrast, the XL2 and DVX-100 record color at 4:1:1 and have a bitrate of 25 Mbps. SDX900 and DVX-100 footage, for example, can be matched with a little effort, and the extra quality will make the footage look a lot better. Cinematography Issues When shooting process photography, do not use any diffusion or colored filters on the camera lens. Diffusion filters will destroy detail required to pull a good matte.
Shooting for Streaming Media If all you intend is to post a video clip on the Internet, it is strongly recommended that you tailor your production and editing methodology to optimize for this delivery format. You want small, continuously playing media that loads quickly and looks its best given the preceding constraints. And although broadband is reaching mainstream levels, there will always be a need for quickly loading video with a small footprint, such as video for cell phones, or small talking-head instructional videos that are part of rich Internet applications (RIAs). Following is a list of optimization tips to make Internet streaming video look good while loading quickly. Consider Limiting the Detail in Each Shot File size increases as there is more detail or motion in each frame. It goes without saying that a frame with a subject in front of a solid color compresses more than a frame with the same subject in front of moving machinery. However, a subject in front of a static field of color is boring to watch. A compromise would include an establishing shot of the subject in front of the machinery followed by the subject in front of a simple background. Conversely, the edit could start with the subject against the simple background with a few meaningful cutaways to the complex scenery with the subject. Another method for limiting detail is to use a shallow depth of field. Bring the subject into focus and leave the background in soft focus. The compression software will then spend its bits preserving detail in the sharp foreground, and the soft, out-of-focus background reduces the chance of motion artifacts. Use the Best Gear You Can Afford Obviously if you’re reading this book, you are most likely going to use a 24p camera with three CCDs, which fulfills the first two rules of streaming media cinematography: shooting progressive and shooting with a three-CCD camera. 
Shooting in 24p Advanced does an even better job of preserving frame detail and requires less compression. Shooting 24p will help tremendously, since progressive footage is easier to compress than interlaced footage and since 24fps means six fewer frames a second to compress. The other tips are to use a high-quality microphone and to record
audio at 48kHz. Keep the microphone close to the subject and maintain proper audio levels so the signal isn’t hot.
Shot 1
Shot 2
Figure 36: Simple backgrounds make for smaller movies
Get Good Exposure and Light Softly Footage with soft and even light compresses better than footage with hard edges created by shadows or overbright light values. Soft light can be achieved by applying diffusion material to the lights or by using a soft box on the key light.
Key only
Key, Fill, and Rim
Key and Fill
Key, Fill, Rim, and Back
Figure 37: The process of adding lights to a scene
Don’t Sweat Title and Action Safe Zones If your only distribution medium is the Internet, there is no need to frame shots to fall within the action-safe or title-safe areas, because Internet video does not get cropped like video on a television with a cathode ray tube.
Figure 38: With Internet video, title and action safe areas are not an issue and you can frame as you please
Motion in a shot is easily controlled by shooting on a sturdy tripod. Time your pans correctly and never do whiplash pans or zooms, as they are the telltale sign of amateur cinematography. Refrain from shooting handheld for every shot. Unless you have super-steady hands, shoot with a camera stabilizer or a dolly.
Documentary Cinematography Given the new world of possibilities that relatively inexpensive 24p cameras provide to the new documentary filmmaker, I’ve included a short section on 24p cinematography for documentaries. Unlike narrative cinematography, documentaries don’t have a script that facilitates easy preparation of a shot list, so the documentary filmmaker is challenged to create this material, usually as he goes. Creating a Shot List Shot lists are not just for narrative films. As your treatment is written, make a list of what you need to tell your story. Sometimes the shots you need will be specific, but other times you will need to overshoot in order to find the bits and pieces that will make up your film. First, find the people who can communicate your positions and capture the situations (shots) as they do this. You might find story elements in a subject’s speech, body language, and facial expressions, but you will most definitely find them in his actions. Narrative and documentary film are based upon observation because action, not words, is what drives a film. It’s not what people say but what they do.
Chapter 3: Cinematography | Documentary Cinematography
Documentary Interviewing Styles Interviews constitute the bulk of shots in a documentary. Cinéma vérité, archival footage, panning stills, or more creative montage make up the rest. Interviews fall into two general categories: the informal stand-up interview and the more formal sit-down interview. A hybrid of the two is the tour, where you follow the subject around a location. Lectures are not considered interviews, but I’ve included a few pointers on shooting those. Stand-Up Interviews Stand-up interviews are the on-the-spot kind one sees in front of the courthouse. They are cheap in the sense that questions are unscripted, unplanned, and simply thrown out. These interviews are good for a quick opinion or sound bite, but you cannot get the depth in these that you would get from a sit-down interview. Since these are often “off the cuff,” frame the shot as best you can and make the best use of available light. A stand-up interview can be anywhere, but it is probably best and more meaningful if it is shot in a location that plays a part in the film, like a controversial location or a place the subject has a historical connection to.
Shot 1
Shot 2
Figure 39: Stand up interviews are best where the subject has a connection to the material
Sit-Down Interviews Sit-down interviews yield more reflective responses not only because the subject knows the questions you will ask, but also because he has made a commitment to be interviewed and, more often than not, has something substantial to say. In either interview situation, you have to ask yourself as a filmmaker and cinematographer what kind of material you are trying to get and what role the subject plays in the film. The setting for a sit-down interview is almost always removed from the action in a documentary film. It is in a library, a person’s home, or an office. This artificial distance between interview and action can be useful for pacing a film since it creates time for the audience to pause and reflect upon the film’s action and interviews.
Shot 1
Shot 2
Figure 40: Sit down interviews are more thoughtful and reflective
A common type of sit-down interview is with the expert or person who has an informed opinion on the subject matter. Expert interviews are for information and should strive to be impartial even though what the expert says might support or dispute a film’s position. Lectures are a good example of where to get expert opinions if you cannot get the expert to commit to an interview. When framing expert interviews, set these as a medium close-up. The other more common type of interview is the personality interview. The subject in this interview is part of the documentary. These tend to be more emotional since the subject’s personality and feelings emerge in the shot. When framing personality interviews, especially for subjects with notoriety, shoot tighter close-ups to capture expressions and emotions. Interview Cinematography Suggestions • Choose settings carefully. Scout the interview place in advance at the time of day that the interview will take place. The background should be out of focus and slightly darker than the foreground. Spots of highlights can be good, especially in the eyes. • As a general rule, subjects should not wear white, black, red, or stripes. A white shirt can be blown out and requires careful lighting, which might not be possible. Black is bad because it reflects little light, and all detail in dark areas is lost. Stripes can cause moiré patterns and other visually distracting artifacts. • Distance between the background and the person speaking should be as great as possible. Video has greater depth of field than film, so keep the background out of focus if possible. • An alternative to the sit-down interview is to have the subject flip through scrap books, photo albums, or historical news clippings. This often adds beauty to a documentary film and adds punch to what is being said. • Consistency in the look is key. While documentary filmmaking is not as stringent as narrative film, continuity and some level of production value will keep the audience focused. 
Capturing the essence of a scene depends entirely on where you are in a scene. Understand what’s going on and learn to take advantage of the scene for angle changes (anticipation).
Video Engineering Now that the fun stuff is over, it’s time to discuss the “Organic Chemistry” of 24p cinematography: video engineering. The basis of the filmmaking process is increasingly more about video
production technology. Because the frame rate for feature work is 24fps progressive, the task of analyzing picture quality requires an understanding of video and digital imaging technologies. It behooves the young filmmaker to learn the ins and outs of this format because a new profession, that of the digital imaging technician (DIT) has resulted from merging the responsibilities of the video engineer with digital imaging skills. Learning this super set of skills and marketing yourself as a DIT is a very viable way to break into the industry. While you will not be forced to drop out if you fail video engineering, your pictures will look a lot worse if you don’t get the basic concepts. I’ve tried to keep this section brief, and I’ve included as many informative diagrams as I can to help get the concepts across. In short, video engineering has two goals: achieving proper exposure and color balance.
Key Differences Between Film and Video Latitude, or the ability to render contrast, is the most crucial difference between film and video in terms of image quality. Film is known to handle seven to eight stops, which gives the film stock a contrast ratio of either 128:1 or 256:1. Video cameras, on the other hand, have only about five stops, which yields a contrast range of 32:1. Because of the reduced ability to maintain effective contrast, it is paramount that you monitor the image quality created by the camera and control exposure through lighting, gelling, and stopping down as needed. The ability to render detail in the shadows and in the highlights is the other area where film and video differ greatly. Film tends to preserve more information in the shadow and highlight areas, whereas video cannot record a lot of detail in those areas.
Camera IRE IRE (Institute of Radio Engineers) units refer to a scale for measuring a video signal’s luminance values in terms of voltage. Legal values run from 0–100 IRE, where 0 is black and 100 is pure white. Actually, broadcast standards require 7.5 IRE to 100 IRE, but DV cameras start at 0 IRE. Setup, or black level, is the IRE value where black begins. So production cameras shoot video beginning at 7.5 IRE, and consumer DV cameras shoot with setup at 0 IRE. When shooting video, you will want to shoot scenes that do not exceed 100 IRE. A physical or virtual waveform monitor is used to measure the video signal and should be used on the set when possible.
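The IRE scale also maps onto digital code values. As a rough sketch, assuming the common Rec. 601 convention that 0 IRE corresponds to 8-bit code 16 and 100 IRE to code 235 (and ignoring NTSC's 7.5 IRE analog setup entirely), the conversion looks like this; the function names are illustrative:

```python
CODE_BLACK, CODE_WHITE = 16, 235   # 8-bit Rec. 601 nominal range (assumption)

def ire_to_code(ire):
    """Map an IRE value (0 = black, 100 = white) to an 8-bit Y' code."""
    return round(CODE_BLACK + ire * (CODE_WHITE - CODE_BLACK) / 100)

def code_to_ire(code):
    """Inverse mapping: 8-bit Y' code back to IRE."""
    return (code - CODE_BLACK) * 100 / (CODE_WHITE - CODE_BLACK)

black_code = ire_to_code(0)     # black maps to code 16
white_code = ire_to_code(100)   # white maps to code 235
```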
Chapter 3: Cinematography | Video Engineering
Maintaining Proper Video Levels
Keep the values between 7.5 and 100 IRE.
Figure 41: Waveform display
The Need for a Production Monitor A production monitor helps you proof four qualities on set and in the editing room: 1. Focus 2. Exposure and color balance 3. What’s really in the frame 4. Screen aspect ratio (important for widescreen footage)
Proofing Focus Having a production monitor on set is the best way to ensure correct focus, because neither the camera’s flip-out display nor the electronic viewfinder (EVF) has the correct resolution. While the camera may be recording 720 × 480, the LCD is barely half that in size. That means the LCD is showing less than half the picture information, and what appears sharp on the LCD could actually be soft and fuzzy when previewed on a monitor or in an NLE. Proofing Exposure In addition to focus, the flip-out display and the EVF will not display proper color balance or exposure. The electronic viewfinder may be black-and-white, or the LCD screen may be too bright or not bright enough. For these reasons, a production monitor that has been calibrated properly helps a lot. Tutorial: Calibrating a Production Monitor In order to follow and complete this tutorial, you will need a camera such as the DVX100 or the XL2, a production monitor, and an S-Video cable. Use a calibrated NTSC production monitor when editing and color correcting video. In this case, output color bars from your NLE and then calibrate the monitor.
1. With the equipment turned off, connect the camera to the production monitor using an S-Video cable.
2. Turn on the monitor and then the camcorder. The monitor should be set to display the camera’s output. If it is not showing the output from the camera, consult the manual for the monitor. Since you will be making slight adjustments and looking for subtle brightness changes, the monitor should not have any glare. If you happen to be shooting on a sandy beach on a sunny day, you have a few options: throw a blanket around the monitor, cut and tape together black four-ply museum board for use as a sunshade, or purchase a sunshade from Porta-Brace or Hoodman.
3. Now output NTSC color bars from the camera. » On the DVX100, press the Menu button and then press down once to select the Camera Setup menu. Press Enter. Press down three times and left once to select the Color Bars option. Press Enter and then the Menu button to turn on the color bars and exit the menu. The color bars should now be shown on the monitor. » On the XL2, press the Color Bars Select button, which is on the same side as the Power dial, between the Shutter speed buttons and the Rec Search + and Rec Search - buttons. Then press the Color Bars On/Off button. The color bars should now be shown on the monitor.
4. By the time you have NTSC color bars on the monitor, it should be sufficiently warmed up and ready for calibration. Now pay particular attention to the three blocks filled with different shades of gray at the bottom-right corner. The bars represent brightness values of 3.5, 7.5, and 11.5 IRE. These bars, known as PLUGE bars (Picture Line-Up Generating Equipment), aid in calibrating the monitor’s black point to absolute black in the NTSC color space.
5. Set the monitor’s brightness so that the middle bar, 7.5 IRE, is just about black, and the third bar, 11.5 IRE, is almost invisible.
6. Set the monitor’s contrast level to maximum and then decrease it until the third bar is almost invisible again. Now you have a monitor with the correct brightness and contrast settings.
7. Now look for a button labeled “Blue Only.” On most monitors it is on the front, below the screen. What you will see is a series of gray bars representing the blue channel of color. When calibrated, the bars have the same tonal values from top to bottom. If they do not, begin by setting the monitor’s chroma levels until the two outer bars are continuous in tone.
8. Then set the monitor’s phase so that the inner gray bars are continuous in tone. The monitor should now have proper color balance.
Color Bars
After connecting the camera to the monitor, you output color bars from the camera and calibrate the monitor using the settings on the monitor.
Figure 42: Sending color bars to a field monitor from the camera
Proofing Aspect Ratio and Framing A production monitor helps you to see a widescreen image properly. With the DVX100 outfitted with the anamorphic adapter this is an important issue, since it does not render 16:9 properly in the EVF or flip-out panel. Most production monitors have a 16:9 switch that will squeeze the image on-screen and give an accurate preview of the widescreen image.
Using Scopes Because video has significantly less latitude than film, monitoring the camera’s output signal is a crucial step before recording the final scene. Reading Zebras Examining exposure is typically done by switching on “zebras” for the camera’s LCD or viewfinder. A zebra, a black-and-white diagonally striped pattern, is overlaid on top of video that is typically overexposed. The menus for both the DVX100 and the XL2 also allow the user to configure the camera to display zebras when areas in the image reach a certain brightness value. Closing the iris when zebras appear is a quick way to prevent harsh clipping in an image’s highlights. However, zebra detection does not show the distribution of luminance (brightness) or chrominance (color) information in the image. These are important things for the professional or independent filmmaker to know because they are indicative of overall image quality. If you know that the majority of the recorded information is in the shadows, there are things you can do with lighting or the camera’s exposure settings before the final recording (the ideal) or in postproduction (less ideal) to redistribute some of the information into the middle tones and highlights.
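The idea behind zebras can be sketched in a few lines of Python: flag any pixel at or above a brightness threshold and draw a diagonal stripe pattern over it. This is a toy illustration, not any camera's firmware; the stripe width of 4 pixels and the function name are arbitrary assumptions.

```python
def zebra_mask(luma_ire, threshold=100, stripe=4):
    """Return a 2-D mask that is True wherever a zebra stripe should
    be drawn: the pixel meets the brightness threshold (in IRE) and
    falls on a diagonal stripe band."""
    mask = []
    for y, row in enumerate(luma_ire):
        mask.append([
            ire >= threshold and ((x + y) // stripe) % 2 == 0
            for x, ire in enumerate(row)
        ])
    return mask

# Two rows of luma samples: the 105 IRE areas get striped, 50 IRE does not.
frame = [[105, 105, 50], [105, 105, 50]]
overlay = zebra_mask(frame)
```

As the surrounding text notes, a mask like this tells you only which pixels crossed the threshold, not how luminance is distributed across the rest of the image; that is what the waveform monitor adds.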
Waveform Monitor A waveform monitor displays the exposure for an entire video frame as a graph ranging from 0 to 120 IRE. (It can display exposure in other forms, but showing the exposure for an entire frame is most useful.) It is the equivalent of the Levels dialog box in Photoshop or a photographer’s light meter. Any shadow detail below 7.5 IRE renders as black. This is known as crushing the blacks. When the color is above 100 IRE, highlight detail is blown out. This is known as clipping the whites. Reading a waveform monitor is done mostly for checking luminance values for video and making sure they are within legal ranges. In addition to referring to IRE values ranging from 0–120, the luminance ranges on a waveform monitor are also broken down into eight zones. Table 3: Waveform Monitor Zone System
Zone    IRE Units
I       Under 7.5 IRE
II      15 IRE
III     20 IRE
IV      40 IRE
V       55 IRE
VI      80 IRE
VII     100 IRE
VIII    120 IRE
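The legal-range check a waveform monitor helps you perform can be summarized in code. A minimal sketch, assuming luma samples already expressed in IRE; the labels are my own shorthand for crushing the blacks and clipping the whites:

```python
def classify_ire(ire):
    """Classify a luma sample against the legal range (7.5-100 IRE)."""
    if ire < 7.5:
        return "crushed"   # rendered as black: shadow detail lost
    if ire > 100:
        return "clipped"   # blown out: highlight detail lost
    return "legal"

# A handful of samples from a hypothetical frame.
samples = [3.0, 55.0, 112.0]
labels = [classify_ire(s) for s in samples]
```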
Vectorscope A vectorscope displays saturation for an entire video frame on a radial graph. Spikes on the vectorscope denote saturation. When they extend past the legal boundaries, they indicate that the frame is too saturated.
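What a vectorscope measures can be sketched simply: each pixel's two color-difference values place it at a point on the radial graph, and the distance from the center is its saturation. A toy Python illustration (units here are arbitrary; a real scope also encodes hue as the angle of the point):

```python
import math

def saturation(cb, cr):
    """A vectorscope plots each pixel's (Cb, Cr) pair as a point;
    its distance from the center is the pixel's saturation."""
    return math.hypot(cb, cr)

# Gray sits at the center of the scope; strong chroma lands far from it.
gray = saturation(0.0, 0.0)
strong = saturation(3.0, 4.0)
```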
Color Models There are three common models for producing color: the color wheel, offset printing inks, and of particular importance to the filmmaker, the color primaries of light. Subtractive Color Models The color wheel and offset printing inks are both subtractive color models because combining their primary colors at full intensity theoretically creates black. In subtractive color models, adding color creates less color. A subtractive color model produces color by reflecting and absorbing color wavelengths. For instance, when you look at a yellow object, yellow light is being reflected to your eye, and cyan and magenta light are being absorbed. If you are familiar with only one of these models, it is most likely the color wheel. The color wheel shows three primary colors of red, yellow, and blue. Mixing any two colors creates secondary colors, and the colors in between the secondary colors are tertiary colors.
Figure 43: Color wheel and color relationships. Mixing primary and secondary colors creates tertiary colors.
Relationships established by the color wheel The color wheel establishes fundamental color relationships. These are monochromatic, analogous, achromatic, complementary, neutral, and split complementary. • Monochromatic colors share the same hue but vary in tint or shade. Tints are created by increasing a color’s brightness or adding white, and shades are created by decreasing a color’s brightness or adding black. • Analogous colors are adjacent on the color wheel and are close in hue. • Achromatic colors are shades of gray, black, and white. • Complementary colors occupy opposite positions on the color wheel. Their hues are 180 degrees apart and can be thought of as two extremes. This effect helps create contrast that is rich and often symbolic. • Neutral colors are formed from a single hue and varying amounts of the hue’s complement. • Split complementary colors are defined by the relationship between three colors: one color and the two that are adjacent to its complement. Just as its name implies, the first color splits the complement and takes the color on either side.
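Complementary and split-complementary relationships reduce to simple arithmetic on a 360-degree hue circle. A small sketch, assuming an RGB-style hue circle (an artist's red-yellow-blue wheel places complements somewhat differently) and a hypothetical 30-degree spread for the split:

```python
def complement(hue):
    """The complementary hue sits 180 degrees across the wheel."""
    return (hue + 180) % 360

def split_complement(hue, spread=30):
    """Split complementary: the two hues flanking the complement.
    The 30-degree spread is a convention, not a fixed rule."""
    c = complement(hue)
    return ((c - spread) % 360, (c + spread) % 360)

# Yellow (60 degrees on an RGB hue circle) pairs with blue at 240 degrees.
pair = complement(60)
split = split_complement(60)
```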
CMYK Offset printing often uses CMYK (cyan, magenta, yellow, and black) to reproduce color. Each color is used for a specific plate of color. The plates are etched with halftone patterns, and when the plates combine on paper, they create tonal and color ranges.
An excellent way to demonstrate that the CMYK color space is subtractive is to adjust the Cyan, Magenta, Yellow, and Black sliders in Adobe Photoshop’s Color palette. As color is added, the color chip becomes darker, and as color is removed, the chip becomes lighter. Figure 44: Offset printing color model
Additive Color Models Television and computer monitors render color with red, green, and blue (RGB) light. This model is additive because when the three colors are equally combined to their full extent, they create white, which contains the full range of colors. This can be seen when white light is split with a prism or when rainbows are created by the refraction and reflection of light in water droplets.
An excellent way to demonstrate that the RGB color space is an additive color model is to adjust the Red, Green, and Blue sliders in Adobe Photoshop’s Color palette. As you add color, the color chip becomes brighter, and as you remove color, the chip becomes darker. Figure 45: RGB color model
When shooting 24p video, the color space is YUV, where the signal is split into a brightness channel and two color-difference channels. Color Resolution Color resolution is not image resolution. Image resolution is the measure of how fine or detailed an image is. Color resolution is the number of colors available to reproduce an image. The first personal computers had 1-bit black-and-white displays. In the late 1980s, gray scale displays became more prevalent. Gray scale monitors could reproduce 256 levels of gray when coupled
with an 8-bit grayscale video card. As computer video technology improved and the first color displays were introduced, 8-bit, or 256-color, displays became prevalent. Most computers today have displays that support 24-bit color, which reproduces approximately 16.7 million colors: each channel of color (red, green, and blue) is given 8 bits, and 256 x 256 x 256 = 16,777,216 possible colors. Video formats use 8, 10, 12, or 16 bits per channel.
Gamut
Gamut is the range of colors a device can reproduce. The gamut of a subtractive color model like offset printing is much smaller than that of an additive model like RGB, which in turn is smaller than the gamut of the human eye.
Figure 46: Gamuts for additive (RGB) and subtractive (CMYK) color models
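The bit-depth arithmetic above is easy to verify. A quick sketch (the function name is mine, not an established API):

```python
# Number of reproducible colors for a given bit depth per channel,
# assuming three channels (red, green, blue).

def color_count(bits_per_channel, channels=3):
    levels = 2 ** bits_per_channel   # levels per channel (8 bits -> 256)
    return levels ** channels

print(color_count(8))   # 16777216 (the "16.7 million colors" of 24-bit display)
print(color_count(10))  # 1073741824 (10 bits per channel)
```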
Color Temperature
Beyond these tactile and creative ways of modeling color, it can also be described by temperature in kelvins (K).

Table 4: Color temperatures for different forms of light

Type of Light        Color Cast      Temperature
Sunrise and Sunset   Warm            2000-3000 K
Noon                 Blue            5600-6000 K
Tungsten             Yellow/Orange   3200-3400 K
Incandescent         Orange/Red      2900 K
The range of temperatures used to model color temperature is based upon the amount of heat required to bring a black-body object to pure white. Assuming the object is at 0 degrees when it is completely black, it changes color as it heats until it becomes white hot.
White Balance
The three primary colors of light exist in any light source but often vary in proportion. For example, fluorescent lighting sometimes has a green cast. The distribution of primary colors in a light source varies with its temperature. As a rule of thumb, the higher the temperature, the cooler or more blue the light; the lower the temperature, the warmer or more red the light. Since white is the presence of full intensity for each of the primary colors, it is a good reference for color-balancing the image. The human eye and brain work together to color-balance what is seen all the time: what is white during the day is equally perceived as white at night. A camera, however, cannot do this unless automatic white balance is on, and you should shy away from almost anything automatic. The reason to shy away from automatic white balancing (AWB) is that it is effective only when no filters or gels are used to adjust the color temperature of the light, and natural-looking colors often cannot be achieved with it on. You do not need to white-balance all the time if the lighting conditions do not change. For example, if you are doing an interior shoot and you have several setups with the same lighting arrangement, you do not need to white-balance between takes and setups. If you were to go outdoors, add or remove a light, or add a gel to a light, however, you would want to white-balance the camera again.
Tutorial: Setting Up White Balance
1. Set your camera on a tripod and place a white card where your subject will be. This is important because you want the white card to be under the same lighting conditions as your subject to get the right white balance. In a bind, a white card can be a piece of paper.
2. Zoom in on the white object until it fills the screen and press the white-balance button on the camera. For the Panasonic DVX100, the white-balance button is a round button below the lens. On the Canon XL2, the white-balance button is below the Aspect Ratio and Frame Rate switches on the side of the camera.
3. The camera screen will go blank for a second, and you should now have a properly color-balanced shot.
White Balance Card
While this card appears to be slightly blue, it is white. By calibrating the camera’s white point setting to it, the color balance recorded to tape will be more accurate.
Figure 47: Setting white balance on your camera
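Conceptually, white balancing scales the color channels so the reference white card reads as neutral. The following is a simplified numeric sketch of that idea, not the camera's actual algorithm; the patch values are invented for illustration:

```python
# White-balance sketch: compute per-channel gains from a reference white
# patch so its average becomes neutral (R = G = B), then apply the gains.
# Values are 0-255; the patch numbers below are made up for illustration.

def wb_gains(patch_rgb):
    r, g, b = patch_rgb
    # Normalize red and blue to the green channel, a common reference.
    return (g / r, 1.0, g / b)

def apply_gains(pixel, gains):
    return tuple(min(255, round(c * k)) for c, k in zip(pixel, gains))

# A white card shot under warm tungsten light reads reddish:
gains = wb_gains((220, 180, 140))
print(apply_gains((220, 180, 140), gains))  # (180, 180, 180) -- neutral
```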
Understanding a Camera’s Basic Controls
A great exercise for any filmmaker is to set up a shot and run tests of the camera controls. In the last chapter, I stressed the need to shoot tests with the camera and format for your project before production begins. These tests are a perfect example of such an exercise. Basic audio controls are covered in Chapter Five: 24p Audio.
Controlling Focus
On both the DVX-100A and the XL2, focus can be set either manually or automatically. Setting focus manually, however, is always better than relying upon autofocus for several reasons: autofocus cannot truly predict what should be in focus, true autofocus doesn’t exist in 24p mode, the autofocus circuit can be tripped accidentally by something else in the scene, and autofocus works horribly in low lighting conditions. On the DVX-100, the focus controls are located on the left side of the camera, after the lens but before the LCD display. The focus control is a switch that changes the camera’s focus operation between auto (A), manual (M), and infinity (∞). The infinity setting is not a permanent mode but a temporary one that reverts to manual focus after setting focus to infinity. There is also a very handy Push Auto button below the switch for setting focus quickly when you do not have time to set it properly. The focus selector on the XL2 is on the lens, on the same side as the power dial. The options are manual (M) and autofocus (AF). The XL2 also offers a quick autofocus feature when managing focus manually, as well as a repeatable focus function, which is ideal for repeating focus in a series of takes. To use it, set the focus selector to manual mode.
Chapter 3: Cinematography | Understanding a Camera's Basic Controls
Move the Position Preset to “Focus.” Adjust focus to the desired setting with the focus ring. When you change focus and move the Position Preset ON/SET switch to “Set,” the camera stores this focus setting. Changing focus again and moving the switch to “On” returns the lens to the stored setting. Using autofocus is acceptable when there is plenty of light, since less light produces a shallower depth of field, which can make it difficult for the autofocus circuit to find true focus. Autofocus is not really recommended when shooting in 24p mode, since the autofocus circuit is updated less frequently: 24 times a second versus 60 times a second when shooting interlaced at 60i. Manual focus is adjusted by turning the focus ring around the lens with the lens in manual focus mode. Focus can be checked using the electronic viewfinder (EVF), the LCD display, or, ideally, an externally connected video monitor. The EVF and LCD show a readout displaying focus in percentages; these percentages can be converted into various metric and imperial units.
Customizing the Focus Ring on the DVX-100
The DVX-100 focus ring doesn’t have hard stops, so it can be hard to repeat focus with the camera. A lockable ring can be attached to the focus ring, allowing you to lock it so you can easily repeat focus, and letting you attach a follow-focus gear for finer adjustment or so a focus puller can assist the camera operator in shooting scenes.
Figure 48: The Century Optics focus ring attachment for the DVX-100 (showing focus lock, markable surface, and focus drive)
Controlling Exposure
Exposure on the camera is primarily controlled by the iris, the neutral density (ND) filter, and the gain control. In daylight situations, the iris and the ND filter are the best ways to manage exposure. In evening or dark situations, ND filters should be off, and the iris and the gain control are used to control exposure. In the next chapter, using these controls in conjunction with various lighting instruments is covered.
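Iris, ND, and gain all move exposure in the same currency: stops, where each stop halves or doubles the light. A sketch of that bookkeeping, assuming common ND fractions and the rough rule that +6 dB of gain equals one stop (the helper names are mine):

```python
import math

# Exposure bookkeeping in stops: each stop halves or doubles the light.
# Helper names and example values are illustrative, not camera menus.

def nd_stops(fraction):
    """Stops of light removed by an ND filter, e.g. 1/64 ND -> 6 stops."""
    return math.log2(1 / fraction)

def gain_stops(db):
    """Rule of thumb: every +6 dB of gain is roughly one stop."""
    return db / 6.0

print(nd_stops(1 / 64))  # 6.0
print(gain_stops(12))    # 2.0
```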
Zoom
Zoom, or focal length, is set by using the zoom button or by turning the zoom ring on the lens. Zoom can also be controlled remotely by using a zoom controller or an infrared remote control.
Zoom Presets
The DVX-100A allows the user to select preset zoom speeds when using the Handle Zoom switch and the Handle Zoom button on the camera. The Handle Zoom button is located near the hot shoe mount. The switch is on the side of the handle closer to the EVF, on the same side as the cassette holder. The switch has markings for three zoom speeds: 1, 2, and 3, where 1 is slowest, 3 is fastest, and 2 is in between. This switch can be further customized in the Handle Zoom menu (Camera mode: SW Mode > Handle Zoom). This menu has two options: L/OFF/H, which stands for low, off, and high, and L/M/H, where M stands for medium. The setting controls whether the switch offers low, medium, and high zoom speeds or only low and high. When L/OFF/H is chosen, setting the Handle Zoom switch to 2 has no effect when pressing the Handle Zoom button. On the DVX-100, the Handle Zoom setting does not affect the zoom button next to the Record Check button, which drives the zoom with variable pressure. The XL2 has two zoom levers: one on the carrying handle behind the hot shoe, and another above the side grip. The speed of the lever on the carrying handle is set in the Camera Setup > Zoom Handle menu, which offers low, middle, and high settings. The lever above the side grip has two speed modes: constant and variable. Constant zooms at a fixed, linear rate; a dial behind the lever sets the speed when this mode is selected. Variable zooms at a rate controlled by how hard the lever is pressed. The XL2 also has a zoom preset function, which is ideal for repeating the same amount of zoom in a series of shots.
To set it, move the Position Preset switch to “Zoom” and the Position Preset ON/SET to “Set.” This zoom level is now stored. When you change the zoom level and move the Position Preset ON/SET switch to “On,” the camera will return to the preset zoom level at one of the constant speeds (low, medium or high).
Scene Files and Custom Presets
A scene file is a collection of customizable camera operation settings stored internally on a camera. Scene files are like workspace layouts in Final Cut Pro or After Effects: the user selects one of the default layouts or a custom layout, and the settings change on the fly without restarting. On the DVX-100 these are called scene files; on the XL2 they are referred to as custom presets. There are six scene files on the DVX-100 and <x> on the XL2. The XL2 has the additional ability to transfer settings from one camera to another over a FireWire cable, and presets can be saved to and loaded from a computer with a FireWire port and supported software.
Managing Scene Files and Presets
Scene files on the DVX-100 are chosen by rotating the Scene File dial on the back of the camera. Presets on the DVX-100A are managed, however, using the Name/Edit and Save/INT menus
(Camera mode: Scene File > Name/Edit and Camera mode: Scene File > Save/INT). The workflow for modifying a default scene file is to navigate to the menu you want to change, change the settings, then navigate to the Save/INT menu and save the changes to the scene file before exiting the menus. The Name/Edit menu is for editing one of the six scene file names. The name appears on the LCD and EVF after selecting a new scene file. The XL2 has three custom presets that store preferences for fifteen of the camera’s imaging features: gamma curve, knee, black stretch/press, color matrix, color gain, color phase, R gain, B gain, V detail, sharpness, coring, setup level, master pedestal, and NR. These custom settings are stored in the camera’s memory and can be set by pressing the menu button and accessing the Custom Preset > Preset Setup > Sel Preset menu. After selecting one of the custom presets (CP1, CP2, CP3), adjust the desired imaging features for the preset and exit the menu. To use a custom preset, press the custom preset select button on top of the camera below the top grip, then press the custom preset on/off button. Presets on the XL2 can be shared across cameras. To transfer a preset:
1. Connect the two cameras with a four-pin-to-four-pin FireWire cable.
2. Change the power dial on the camera that contains the master presets to “Ext. Cont” and change the power dial on the receiving camera to a recording mode other than Easy Recording.
3. Press the menu button on the receiving camera and navigate to Custom Preset > Read Preset > Sel Preset. Choose the preset you want to import from the master camera and press Sel Position.
4. On the Sel Position menu, select the preset to overwrite with the imported preset, then select Overwrite and Yes to confirm the modification.
In general, the DVX-100 has settings on a -7 to +7 scale, and the Canon XL2 has a scale with ten tick marks and (-) and (+) at each end. Both cameras also have a few settings that use low, normal, or high keywords.
Detail Level
Detail Level adjusts picture sharpness by increasing or decreasing the contrast at color edges. Increasing the detail level artificially sharpens the image by increasing contrast. Decreasing the detail level artificially softens the image by decreasing contrast. In most cases it is fine at 0, or no detail level adjustment, but softening can help reduce hard shadows. If you are planning a film out, turn detail off! Artificially sharp edges do not transfer well when uprezzed for film outs. On the DVX-100, reducing Detail Level (down to -7) softens the image and increasing it (up to +7) sharpens the image. The setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Detail Level menu. On the XL2, the corresponding setting is Sharpness in one of the three custom presets (Custom Preset > Preset Setup > Sel Preset menu). Once you choose a preset, select the Sharpness setting and move it toward (+) to sharpen the image or toward (-) to soften it.
Vertical Detail
This setting adjusts picture sharpness by increasing edge definition in the vertical direction. Its effect on the image is not as pronounced as the Detail setting, since it works only in the vertical direction. The DVX-100A has the Vertical Detail option; the original DVX-100 does not. Like Detail, the default setting is 0, and reducing it (down to -7) softens the image while increasing it (up to +7) sharpens the image. The setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Detail Level menu. On the XL2, choose a preset, select the V Detail setting, and select Normal (sharp) or Low (less vertical sharpness).
Detail Coring
Detail Coring reduces noise by smoothing image pixels. Increasing the Detail Coring level smooths out noise; decreasing it leaves the noise untouched. Detail Coring is a good setting to experiment with when you use other settings that create noise or overly sharp areas, such as high gain or +7 Detail. The DVX-100A has this setting; the original DVX-100 does not. Reducing Detail Coring (down to -7) does not smooth noise in the image at all, while increasing it (up to +7) smooths much of the noise in the image. The Detail Coring setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Detail Coring menu. On the XL2, choose a preset, select the Coring setting, and move it toward (+) to reduce noise or toward (-) to increase noise.
Chroma Level
Chroma adjusts picture saturation. Increasing the Chroma level increases saturation, while decreasing it desaturates the image. On the DVX-100, reducing Chroma Level (down to -7) removes almost all color information, making the image look black and white with subtle hints of color. Increasing Chroma (up to +7) makes the image punchier, but beware of colors that may not be NTSC-safe, such as red.
The Chroma setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Chroma menu. On the XL2, choose a preset, select the Color Gain setting, and move it toward (+) to increase saturation or toward (-) to desaturate the image.
Chroma Phase
Chroma Phase adjusts the picture’s color balance along an axis that begins with yellow-green and ends with purple. Increasing it shifts the color balance toward magenta and purple; decreasing it shifts it toward yellow-green. Chroma Phase does not adjust saturation; it only shifts the color balance, so it is a light tinting effect. If you want to make footage really yellow-green, for example, you’ll have to turn down Chroma Phase and increase the Chroma Level. On the DVX-100, reducing Chroma Phase (down to -7) tints the image in the yellow-green direction. Increasing Chroma Phase (up to +7) tints the image in the magenta-purple direction. The
Chroma Phase setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Chroma Phase menu. On the XL2, choose a preset, select the Chroma Phase setting, and move it toward (R) to add more red or toward (G) to add more green to the image.
Color Temperature
Color Temperature adjusts the picture’s color balance along an axis that begins with red and ends with blue. Increasing it shifts the color balance toward blue, or a white balance of a higher color temperature. Decreasing it shifts it toward red, or a lower color temperature. Color Temperature is a stronger tinting effect than Chroma Phase. It should be used after white balancing to make subtle adjustments to the picture’s white balance. On the DVX-100, reducing the Color Temperature setting (down to -7) shifts the color temperature toward red; increasing it (up to +7) shifts the color balance toward blue. The Color Temperature setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Color Temperature menu. The DVX-100’s color temperature setting cannot be adjusted when using either the 3200K or 5600K white balance preset; it is adjustable when the white balance is set manually.
Master Pedestal
Master Pedestal, or black level, affects the dark portions, or shadow detail, of an image. It is called “pedestal” because it is the flat, pedestal-looking shape at the bottom left of a gamma curve. Decreasing the Master Pedestal level pulls the dark areas into black (also known as crushing the blacks) and increases picture contrast. Increasing the Master Pedestal lightens the dark areas and gives the picture a low-contrast look. Master Pedestal can be used as an additional tool to improve overall exposure when a scene is too bright (by decreasing pedestal) or too dark (by increasing pedestal). The cameras offer greater control with pedestal by having a greater range of adjustment, but remember that proper lighting will yield better results. On the DVX-100, reducing Master Pedestal (down to -15) darkens the shadow detail and increasing it (up to +15) lightens it. The Master Pedestal setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Master Pedestal menu. On the XL2, choose a preset, select the Master Pedestal setting, and move it toward (+) to brighten the image and reduce contrast or toward (-) to darken the image and increase contrast.
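Numerically, pedestal is just an offset added to the signal before it is clipped to legal range, which is why lowering it crushes blacks and raising it flattens contrast. A sketch with invented values:

```python
# Master pedestal sketch: black level as an offset on the luma signal,
# clipped to a 0-100 IRE range. The sample values are illustrative.

def apply_pedestal(values, pedestal):
    """pedestal > 0 lifts shadows (lower contrast); < 0 crushes them."""
    return [max(0, min(100, v + pedestal)) for v in values]

shadows = [2, 5, 10, 50, 90]
print(apply_pedestal(shadows, -5))  # [0, 0, 5, 45, 85] -- blacks crushed
print(apply_pedestal(shadows, 5))   # [7, 10, 15, 55, 95] -- shadows lifted
```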
Figure 49: Pedestal and knee portions of a gamma curve. The plot graphs scene brightness against output level and compares a video gamma curve, a film gamma curve, and a cine-like curve with a knee; the dynamic range for film exceeds the dynamic range for video. Most 24p cameras offer gamma settings that mimic a film curve by adjusting the pedestal and knee portions of the gamma curve.
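The film-like curve in Figure 49 can be sketched as a transfer function: a power-law gamma with highlights compressed above a knee point. The gamma exponent, knee point, and knee slope below are invented for illustration, not any camera's actual values:

```python
# Gamma-with-knee sketch: power-law gamma below the knee point,
# compressed slope above it. Inputs and outputs are normalized 0-1.

def film_like(x, gamma=0.45, knee=0.8, knee_slope=0.25):
    y = x ** gamma                         # basic gamma curve
    if y <= knee:
        return y
    return knee + (y - knee) * knee_slope  # attenuate highlights past knee

for x in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(round(film_like(x), 3))
```

Midtones pass through the gamma curve unchanged, while values that land above the knee are squeezed toward it, which is the "gradual attenuation" described in the Knee section below.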
Auto Iris
The Auto Iris setting boosts or lowers the camera’s automatic exposure function. Lowering it is akin to closing the aperture, or increasing the f-stop. Raising it is like opening the aperture, or decreasing the f-stop. This setting should be set carefully, and it should be increased only in darkly lit settings, because it is very easy to blow out highlight detail by increasing Auto Iris on a normally or brightly lit shot. On the DVX-100, the Auto Iris setting ranges from -4 to +4. Different scene files set Auto Iris differently to protect highlight information; see the chart on page <<x>> for the default scene file settings for the DVX-100. The Auto Iris setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Auto Iris menu.
Gamma
The Gamma setting controls how brightness values are distributed in a picture. In imaging terms, gamma is often described as a function of input brightness, the brightness inherent in the shot, versus output brightness, the brightness values actually recorded by the camera. In an ideal world, a camera would record brightness in a 1:1 relationship, which would result in a linear curve, a diagonal line. In actuality, most media, whether analog or digital, cannot do this; film, for example, has an S-shaped curve where shadow detail ramps up slowly, the midtones are fairly linear, and the highlight detail tapers off. For these reasons, gamma, or the way brightness is distributed in an image, is discussed in terms of curves. On the DVX-100A, there are seven gamma settings: Low, Normal, High, Black Press, Cine-Like, Cine-Like D, and Cine-Like V. The gamma curve for Low shifts the curve to preserve shadow detail at the expense of highlight detail. High raises the dark portion of the curve while keeping the bright areas at a normal level. Normal produces a gamma curve like a typical video camera, where bright values are stretched and dark values are compressed. Black Press stretches latitude
in the midtones while compressing the shadow detail a little and keeping the highlight information the same. The DVX-100A has two Cine-Like settings the original DVX-100 lacks: Cine-Like D and Cine-Like V. The gamma curve for Cine-Like is modeled after a typical film gamma curve, where overall dynamic range is preserved at the expense of knee protection (protecting bright values). Cine-Like D, where D stands for dynamic range, makes up for the lack of knee protection in Cine-Like by extending the dynamic range further into the highlights, but does so at the expense of additional noise. The curve for Cine-Like V takes the Cine-Like curve and stretches both the dark and light portions to increase overall contrast. A little latitude is lost, but the picture is sharper and punchier than Cine-Like. The Gamma setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Gamma menu. On the XL2, choose a preset, select the Gamma Curve setting, and choose Normal for a typical video gamma curve or Cine for a film-like gamma curve.
Knee
The Knee setting is used to prevent highlight clipping in a given exposure. It is called “knee” because it is the bend at the top right of a gamma curve. It does not represent the white point; rather, it is the point at which white values are attenuated gradually (yielding additional highlight detail), cut off harshly (producing clipped whites), or compressed to prevent clipping. Setting the Knee to a low value begins attenuating brightness values sooner. The DVX-100A has the Knee option; the original DVX-100 does not. The available settings are Auto, Low, Mid, and High. Auto is set automatically based upon the incoming brightness values. Low begins to attenuate brightness at 80%, or 80 IRE. Mid does this at 90% and High at 100%, but remember that the full IRE scale runs from 0 to 108.
Ideally, one shoots with the Knee set to High and controls the lighting to get the maximum amount of latitude in the exposure. In scenes that are overly bright, however, setting the Knee to Mid or Low can prevent clipped whites. Note that turning on the knee when it isn’t needed will make the highlights look flat and unnaturally gray. The Knee setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Knee menu. On the XL2, choose a preset, select the Knee Point Adjustment setting, and choose High, Medium, or Low. As with the DVX-100, the XL2’s Medium or Low will preserve more image detail in the highlights, whereas High will tend to clip overly bright values. The XL2 also has a Black Stretch/Black Press setting for each preset, for adjusting the contrast range in the shadows. The settings are Stretch, Middle, and Press. Stretch and Middle bring out more shadow detail by lifting the gamma curve from pure black, whereas Press pushes shadow detail toward black and loses more detail in the image.
Matrix
The Matrix settings alter the picture’s color response to accommodate specific lighting conditions. This means that certain colors in the spectrum are enhanced or deemphasized to preserve a natural-looking color balance. On the DVX-100A, there are four Matrix settings: Normal, Enriched, Fluo, and Cine-Like. Normal should be used when shooting outdoors or under halogen lighting instruments in a studio
setup. It doesn’t alter the picture’s color response, since it is meant to work with balanced lighting. Enriched, available only on the DVX-100A, increases overall saturation, especially in the warm range of colors (reds/oranges/yellows) and to a lesser degree in the cool range (greens/cyans/blues). Fluo stands for fluorescent, and it boosts all the colors except the greens. Since some indoor fluorescent light fixtures have a disproportionate amount of green light in them, Fluo deemphasizes green to create a more balanced image. The Matrix setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Matrix menu. On the XL2, choose a preset, select the Color Matrix setting, and choose Normal for normal colors or Cine for film-like colors.
Skin Tone Detail
The Skin Tone Detail setting reduces wrinkles and skin blemishes by softening areas of color that it perceives to be skin. It is a subtle effect, so try it out; if it doesn’t work, turn it off and fix it in post by duplicating the footage, blurring the duplicate layer, setting its transfer mode to Screen, and adjusting the duplicate layer’s opacity until you get the desired softness. To isolate the softness, create a simple mask for the face and feather the edges slightly. On the DVX-100A, Skin Tone Detail has two settings, On and Off, and it is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Skin Tone Detail menu. On the XL2, it is accessed by pressing the Menu button and navigating to the Camera Setup > Skin D Set > Skin Detail Off menu. There are four settings, High, Medium, Low, and Off, ranging from more softening of skin detail to none. Before the amount of softening is applied, however, the type of skin color must be identified. Pressing the menu button and adjusting the subordinate menus (Hue, Chroma, Area, and Y Level) under the Camera Setup > Skin D Set menu identifies the skin tone to soften. Hue shifts the function toward reddish (R) or greenish (G) skin. Chroma alters the function to soften skin tones that are more saturated (+) or less saturated (-). Area increases the amount of skin to soften. Y Level selects light (+) or dark (-) skin tones.
Vertical Detail Frequency
Vertical Detail Frequency is available only in progressive shooting modes because it is meant to add a little vertical smearing to the frame to prevent flicker on an interlaced display device such as a cathode-ray television. If you plan to distribute only on interlaced television sets, turning this setting on can be helpful, but vertical resolution is lost, since the smearing reduces picture detail. Along the same lines, if you are planning to do a film out, the setting should be set to Thin, which turns the smearing off and preserves the most information for the film out. On the DVX-100A, there are three Vertical Detail Frequency settings: Thin, Mid, and Thick. Thin applies very little smearing. Mid, available only on the DVX-100A and not on the original DVX-100, applies some smearing but not as much as Thick. Thick applies the most smearing and makes the 24 fps footage more compatible with interlaced displays. This setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Vertical Detail Frequency menu.
Progressive or Frame Rate
This setting is what makes cameras such as the DVX-100 and the XL2 truly remarkable: adjustable frame rate and the ability to switch between interlaced and progressive capture. The options are 60i, 30p, 24p standard, and 24p advanced. 60i behaves like a normal video camera. 30p is good for shooting for Internet streaming at a higher bit rate. 24p standard is good if you are going to mix 24p standard material with 29.97 material, and 24p advanced is best for film outs, DVD video, and Internet streaming. The DVX-100 has a Progressive mode where Off means 60i, 30p is 30 fps progressive, 24p uses 2:3 pull-down to fit 23.976 fps video onto a 29.97 time base, and 24p (ADV) uses the syncopated 2:3:3:2 cadence to get true 23.976 onto the 29.97 time base. This setting is accessed by pressing the menu button in Camera mode and navigating to the Scene File > Progressive menu. On the XL2, the frame rate settings are exposed as physical controls on the outside of the camera. The frame rates are 60i, 30p, and 24p. When shooting in 24p, the default method is 24p standard, or 2:3 pull-down. To shoot using the advanced method, or 2:3:3:2 pull-down, turn the frame rate to 24p, press the menu button, and navigate to the Camera Setup > 24p Sel menu. On this menu, toggle the 24p shooting option to 2:3:3:2.
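The difference between the two pull-down schemes is simply the field cadence used to spread four progressive frames across ten interlaced fields (five video frames at 29.97). A sketch:

```python
# Pull-down cadence sketch: distribute 4 progressive frames (A, B, C, D)
# over 10 interlaced fields (5 video frames at 29.97).

def pulldown(frames, cadence):
    fields = []
    for frame, count in zip(frames, cadence):
        fields.extend([frame] * count)
    return fields

frames = ["A", "B", "C", "D"]
print(pulldown(frames, (2, 3, 2, 3)))  # standard 2:3 pull-down
print(pulldown(frames, (2, 3, 3, 2)))  # advanced 2:3:3:2 pull-down

# Pairing the 2:3:3:2 fields into video frames gives (A,A) (B,B) (B,C)
# (C,C) (D,D): only the middle video frame mixes two film frames, so
# dropping it recovers A, B, C, D exactly -- which is why the advanced
# cadence removes more cleanly in editing.
```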
24p Audio
Picture conveys information and sound conveys emotion
Chapter 4: Audio | 24p & Audio
The Importance of Sound Sound production often gets overlooked by the inexperienced filmmaker because he becomes too enamored with the camera he’s using. He assumes that the microphone on the camera is sufficient. This is a grave mistake because the recording capability of an on-camera microphone pales compared to a dedicated microphone for two reasons: sound quality and versatile placement. An on-camera microphone is not as well made as a dedicated shotgun or lavalier microphone. The camera manufacturer spends more resources on the other components such as the optics or the CCD. Even if the quality of the on-camera microphone is stellar, its placement is rarely close enough to the talent for optimal recording. A shotgun or lavalier microphone can be placed directly above, below, or on an actor, wherever it takes to get the best sound possible. Another mistake is believing that whatever gets recorded can be “sweetened” (fixed) in post-production. Fixing badly recorded sound is about as costly as fixing poorly lit or overexposed video. It simply takes too much time and too many resources. By making the right investments in equipment and skilled personnel, you are more likely to record good sound during the shoot.
The Location Sound Recording Team The team includes a sound recordist who monitors the recording and a boom operator who aims a microphone at the subject. On any narrative shoot, where the shooting style is more relaxed, it pays to have both a recordist and one or two experienced boom operators. While it is possible to have one person do both jobs on a sit-down interview shoot, it should not be attempted when doing run-and-gun style shooting. In addition, the recordist usually manages the audio tapes, fills out sound production forms, and hands off the final tapes to editorial.
Figure 1: A basic location sound team consists of a boom operator and a sound mixer.
Chapter 4: Audio | The Importance of Sound
Audio Equipment Following is a list of equipment for a basic audio kit, with explanations. Keep in mind that most of these items can be rented.
• Microphones: No single microphone will work in every situation. For that reason, it is good to have a number of different microphones handy and to select the right microphone well ahead of the shoot. A good kit should have at least one shotgun, two wireless lavalier kits, and a handheld omnidirectional microphone.
• Headphones: A good pair of headphones should be the first thing you purchase. Look for a pair of closed-ear reference headphones. Open-ear headphones do not isolate the recorded audio and should not be used. Sony and Sennheiser make excellent monitoring headphones.
• Boom pole: This is a long telescopic pole for aiming a microphone at a sound source. Carbon fiber poles are lighter than those made of alloy. Some boom poles come threaded with XLR cable, but this isn’t necessary. Stay away from broomsticks; rent a pole if purchasing one is not an option.
• Shock mount: A shock mount prevents noise caused by moving the microphone around.
• Zeppelin and windscreen: A zeppelin is necessary for recording outdoors, and even indoors if there is a lot of noise created by air circulation, for example. A fluffy windscreen blocks out even more of the unwanted noise. A few companies, such as Rycote, make an all-in-one shock mount, zeppelin, and windscreen.
• XLR cable and adapters: Always bring more cable than you need. Cables can go bad, be too short, or be the wrong variety.
• Sound blankets: These dampen unwanted noise and echo.
• Mixer: This allows for the best audio amplification and level adjustment. It also helps mix several sources, or keep sources clean and separate if recording to a multi-track sound recorder.
• Sound recorder: A recorder that acts as a second system for recording sound is insurance against tape dropouts and, in some cases, allows for recording at higher bit and sample rates. A recorder is also great for doing ADR and effects recording.
• Production carrying case: An over-the-shoulder carrying case is a necessity for the recordist working with a mixer and recorder. Porta Brace makes incredibly well-made cases for nearly every model. Often, these are part of the rental package for a mixer or recorder.
• Clapper: This is crucial for second-system sound, as it serves as an audio cue point for synchronizing picture and audio in post.
• Miscellaneous audio gear: Gaffer’s tape, boom pole mounts for C-stands, plastic crates for hauling equipment, plenty of spare batteries, and a clipboard with plenty of extra sound production forms round out a solid audio kit. Label everything, keep manuals handy, and pack gear like you want it to last a lifetime.
Figure 2: An audio kit ready to go (wireless units, omni handheld microphone, XLR cable, windscreen and zeppelin, stand, mixer, shock mount, shotgun microphone, boom pole)
Microphone Characteristics Like a camera’s image sensor, a microphone’s function is to convert what occurs in the real world, namely acoustical pressure caused by vibration, into an electric signal. The electric signal is then converted into a digital signal and stored on tape, a hard disk, or solid state memory. Often a microphone is chosen because someone likes the sound it creates. There are, however, more objective ways to select a microphone, such as its sensitivity, frequency response, polar pattern, dynamic range, impedance, and form factor.
Sensitivity A microphone’s sensitivity measures its ability to pick up sound. While greater sensitivity is generally better, actual performance gains can go unnoticed if pre-amplification or recording levels are not suited to the sound source. Sensitivity is less important than frequency response, polar pattern, or dynamic range when selecting a microphone.
Frequency Response Frequency response is a microphone’s ability to capture sounds of varying frequencies. Sound frequency refers to pitch: how high or low a sound is, measured in Hertz (Hz). For example, a high sound would be a whistle and a low sound would be a chord on a bass guitar.
Chapter 4: Audio | Microphone Characteristics
Table: Comparing sound frequency

Frequency (Hz)      Sound
20,000 or higher    Sounds audible only to bats and dogs
20,000              Maximum audible frequency for the human ear
10,000              Highest frequency possible for a female voice
8,000               Highest frequency possible for a male voice
440                 “Concert A” (the note used for tuning an orchestra)
180                 Lowest frequency possible for a female voice
100                 Lowest frequency possible for a male voice
20                  Minimum audible frequency for the human ear
20 or lower         Sounds that are not heard but sensed, such as a low bass vibration
A microphone can favor high or low frequencies or be impartial. When it favors high pitch sounds, it excels at recording thin and hollow sounds. When a microphone has a low frequency response, it excels at recording low and deep sounds. A microphone that records both equally is said to have a flat frequency response.
Polar Pattern Polar pattern, also referred to as a pickup, directional, or sensitivity pattern, is the area surrounding a microphone that defines the microphone’s sensitivity. For example, one microphone may pick up sound from every direction while another picks up sound only inside a narrow angle extending out from the front. The angle at which the microphone is most sensitive is its axis, and anything outside this angle is off-axis. Side rejection is a microphone’s capability to reject off-axis sound. Rear pickup occurs with some directional microphones that reject sound from the sides but do not block sound from the rear. The four common polar patterns are omnidirectional, cardioid, supercardioid, and hypercardioid. The last three are known as unidirectional.
Figure 3: Polar patterns (omnidirectional, cardioid, hypercardioid, and interference tube/shotgun, each plotted on a polar chart with a 0 to -20 dB scale)
Omnidirectional An omnidirectional microphone, also referred to as an “omni,” picks up sound in all directions. Depending upon the size of the microphone, its casing can influence the pickup pattern a little, but almost never to a degree that would cause problems. Since they can be built small, omnis are used in lavalier and handheld microphones for interviews. An omni is ideal if several voices are the same distance from the microphone and are speaking at the same level. Television news correspondents use handheld omnis because these will pick up the subject’s voice as well as the voice of whomever they are interviewing on camera. An omni is not affected by proximity effect, an effect where low sound frequencies, or bass, are boosted when the microphone is too close to a sound source. Conversely, there are situations when an omni should not be used, such as when background noises are loud enough to be picked up and in rooms with reverberation caused by hard floors and walls. The first situation is a problem because omnis pick up sounds from all directions. To overcome these conditions, place the microphone closer to the subject and turn down the recording level until the background noise or reverb is not picked up or is at an acceptable level. Putting down foam or sound blankets (any thick blanket will do) where they are not seen by the camera will absorb reverberation too. Cardioid A cardioid microphone gets its heart-evoking name (cardio) from the shape of its pickup pattern, which is heart shaped. This pickup pattern records sound from the front and sides and rejects sound from the back. Cardioids are also known as unidirectional microphones; they are less sensitive to vocal popping and good for recording room tone. A cardioid rejects sound from the back and sides by cancelling it out: along the sides of the microphone are holes where sound enters from both sides and cancels out.
Supercardioid and Hypercardioid Supercardioid and hypercardioid microphones are much more directional and offer greater side rejection than cardioids. While they isolate important sounds and reject unwanted surrounding noise, their narrow sensitivity means that they must be continuously pointed at the subject to assure consistent sound levels. It will also be noted from the polar patterns that, while these styles have a narrower coverage pattern than a cardioid, they also begin to pick up higher frequency sound at the rear of the microphone. Unidirectional microphones are good at recording what is directly in front of them while rejecting background noise. This makes them ideal for isolating talent onto separate tracks when a microphone is paired with each actor. They also boost bass, or low frequencies, when the source is close to the microphone.
Impedance Impedance rates the amount of resistance, measured in ohms (Ω), a microphone presents to an electric audio signal. Low (less than 600Ω), medium (600Ω to 10,000Ω), and high (greater than 10,000Ω) are the three designations for microphone impedance. A consumer microphone with a single quarter-inch cable typically has high impedance, while a shotgun microphone with XLR connectors is low impedance. In general, low impedance means higher quality and greater expense. High impedance microphones lose high-frequency recording quality over long cable runs, whereas low impedance microphones do not. It’s important to match impedance between microphones and mixers, cameras, and other audio equipment in order to preserve signal strength. Look at the manuals or product web sites to locate impedance ratings and make sure you go by the measurement in ohms. In general, a low impedance microphone should be paired with an input of equal or greater impedance. If it is paired with a device that has a lower impedance, signal strength is lost.
Dynamic Range The entire range of sound a microphone can pick up, from the quietest to the loudest, is known as its dynamic range. Within a microphone’s dynamic range are its signal-to-noise ratio (s/n) and its maximum undistorted recording level. The first measures the quietest sound that can be recorded above the noise floor, the level of undesirable noise such as hum, static, or hiss. The second is the loudest sound a microphone can record free of distortion. A microphone with 120 dB or more of dynamic range is considered good, but most recording media barely have 90 dB of range. Digital recording media have more dynamic range than analog recording media.
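As a quick sanity check on these numbers, decibels for amplitude are computed as 20 times the base-10 logarithm of a ratio. The short sketch below uses illustrative figures, not measurements from any particular microphone, to show why a 120 dB microphone outruns a 16-bit recording medium, whose 65,536 quantization levels give roughly 96 dB.

```python
import math

def db(ratio):
    """Convert an amplitude ratio to decibels (20 * log10)."""
    return 20 * math.log10(ratio)

# Hypothetical microphone whose loudest undistorted signal is a million
# times the amplitude of its noise floor:
print(round(db(1_000_000)))  # 120 dB of dynamic range

# A 16-bit digital medium has 2**16 discrete amplitude levels:
print(round(db(2 ** 16)))    # 96 dB
```

The same formula explains why a few dB of headroom matters: every 6 dB roughly doubles the amplitude the medium must accommodate.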
Noise Susceptibility Microphones are also susceptible to noise from outside the sound source. Wind noise, low-level pops, and magnetic hum are all noise sources a microphone should have built-in or added protection against. When wind enters a microphone, it can create irreparable noise if it begins to rattle the diaphragm. No matter how well a microphone claims to block wind noise, it is a good practice to cover the microphone with a zeppelin when shooting inside and with both a zeppelin and windscreen when shooting outdoors. Pop noise is the sudden clicks and pops that occur when talent is too close to an unprotected microphone and their voice pops the microphone’s diaphragm. Most microphones come with a wire mesh screen that defeats pop noise. Magnetic hum comes from sources such as fluorescent lights, and the best remedy is to move the microphone as far from the source as possible.
Microphone Form Factors The type of sound you are recording, as well as your recording conditions, will help in selecting a microphone. Does the subject need to control the positioning of the microphone? In this case, you most likely want a handheld. Does the subject need his hands to be free and to have mobility? The answer here is to use a lavalier or a shotgun on a boom. Will the set or lighting prohibit a boom operator? In this case, a lavalier or a hidden shotgun is the best bet. In the following sections, the merits and pitfalls of handheld, lavalier, and shotgun microphones are listed. Handheld Handheld microphones are what most people envision when they think about microphones. This is in part due to their experience watching the evening news or spending too much time at karaoke bars. A handheld microphone usually has an omnidirectional pickup pattern. It doesn’t need to be hand held: you can also mount it on a microphone stand, a camera, or a boom if you have the proper mounting equipment. Handhelds are best when a boom operator cannot help, when you cannot place a lavalier microphone on the subject, or when you are “running and gunning it” and do not have the luxury of film-style sound production.
Figure 4: A table top stand and a wireless capsule makes for an easy way to record sound at a conference table.
Ideally, a handheld microphone, like all other microphones, is held six to twelve inches from the talent’s mouth and at a 45-degree angle. If the handheld is unidirectional, do not place the microphone closer than six inches to the subject’s mouth or proximity effect will occur.
Lavalier Microphone “Lavalier” originally meant a microphone that was worn around a subject’s neck like a lanyard. Today, a lavalier microphone, or “lav” for short, is a small omnidirectional microphone worn on a shirt or tucked and hidden underneath clothing. Since it has an electret condenser design, it can be made incredibly small while being very sensitive and having a full frequency response.
Figure 5: A lavalier microphone
When worn on the talent, a lavalier favors the talent’s voice over the environment and produces a recording that sounds as if the talent is very close to the audience. At times this can sound unnatural, depending upon the context and the elements in a shot. Sound checks should be done after putting a lavalier on talent to troubleshoot two potential noise issues: clothing noise and wind noise. When a lavalier is tucked loosely between two layers of clothing, contact noise occurs from the microphone rubbing against both layers. To prevent this, place the lavalier between two triangle-shaped pieces of tape with the sticky side facing outward. The triangle-shaped pieces of tape will secure the lavalier to both layers of clothing. Use additional pieces of tape to secure the lavalier’s wire down to the wireless unit or, if the lavalier is wired, at the point where the wire exits the talent’s clothing. If the lavalier sits between skin and clothing, take care to use medical tape to secure it to the talent’s skin.
1. Cut out a square piece of tape.
2. Roll a small piece of tape together and place inside.
3. Fold over each side and press against tape inside.
4. Sandwich the lavalier between two triangles and fasten it so it sticks to both layers of clothing. Use additional tape to secure the lavalier’s wire.
Figure 6: Steps in securing a lavalier
When worn on the outside, wind noise is picked up if the microphone is not properly shielded. Make sure the windscreen is on the lavalier; if that is not enough, invest in a small furry windscreen that will reject the additional wind noise, or buy a pair of woolly gloves for a small child, cut off the tip of a finger, and cover the lavalier with it. Other, less obvious sources of noise are bad cables and a gain setting so high that it makes the microphone overly sensitive. Lavaliers can be wired, but they are most often wireless on DV shoots. A wired lavalier connects to an XLR cable that is plugged into a mixing board or recorder. This puts the talent on a leash and decreases their mobility. Take caution when using this method if the actor is required to move: either give her plenty of cable, limit movement, switch to a wireless setup, or switch to a shotgun microphone on a boom. In between takes the actor will want to stretch her legs, so make sure that the lavalier can easily be disconnected from the cable running into the mixer or recorder. Since a lavalier microphone is very small, it is easy to conceal from the camera. When securely fastened, it maintains a constant distance from the subject’s mouth, and as long as the subject does not change the volume of her voice, the audio mixer’s job becomes easier because he does not have to ride the levels to account for changes in audio levels. A lavalier is great when the subject needs a lot of mobility in front of a large live studio audience, or when the subject needs to use her hands to perform or demonstrate and cannot use a handheld microphone. Shotgun Microphones The shotgun microphone gets its name from its long barrel shape. Along the sides of a shotgun are thin openings that cancel out sound arriving from the sides and allow the microphone to pick up sound from the front.
Figure 7: The Sennheiser 416 Shotgun microphone
Overhead miking with a shotgun microphone is by far the best option if the shooting style allows it. This method produces the most natural mix of sound. Dialog is favored, but ambient sounds and effects are also recorded at appropriate levels.
Figure 8: Overhead miking with two shotgun microphones. Notice that one is placed on a stand.
A common misconception is that a shotgun microphone is the telephoto lens of sound. While both a telephoto lens and a shotgun are narrow in what they capture, a shotgun microphone does not zoom into faraway sounds. The narrow range, or field of hearing, that a shotgun microphone has allows subjects to be farther away as long as the microphone is pointed straight at them. Pointing the microphone straight at a subject’s voice, however, picks up sounds behind him at too high a level, and this is the reason overhead miking is preferable to aiming the microphone like a shotgun. Remember to avoid aiming the microphone at hard floors, walls, or ceilings, since they reflect background noise into the microphone. As was mentioned earlier, a sound blanket, heavy blanket, or quilt can absorb reverberation and noise. Since a shotgun microphone is sensitive to wind noise, do not sway the microphone when shooting outdoors, and always shield the microphone from the wind with a zeppelin and windscreen. When putting the shotgun on a pistol grip, boom, or camera, use a shock mount to absorb any additional vibration or noise where the microphone might unexpectedly hit another object.
Figure 9: A pistol grip shock mount can be held or mounted. Use a wind screen when recording outdoors.
Boom Pole A boom pole is a long telescopic rod for positioning a microphone directly above (the convention) or below the talent. Since the boom can extend several feet, the boom operator stays out of the frame and out of the talent’s way. To make the boom operator’s job easier, he and the recordist should be involved in discussing how scenes and individual takes are blocked (choreographed). This will help him plan how to move and position the microphone for the best possible recording while avoiding an accident. Operating a boom is both a physically and mentally demanding job. The boom operator is expected to do his job “transparently” by recording sound perfectly and by not getting in the way of the talent or the camera. Often he has to do this all day long and has to be consistent in keeping the microphone at the same spot. Appreciate this effort and give him the rest he needs by providing breaks and a backup operator.
Figure 10: Hold the boom with both hands straight over your head and make adjustments as necessary.
Handling a Boom Pole After attaching the microphone to the mount and covering it with a zeppelin and windscreen, attach the mount to the end of the boom pole while the pole is fully collapsed. To extend the boom, fully extend the number of sections needed and then retract each extension an inch or two. Fully extending a section makes it weaker and wears it out more quickly, and once a section has been weakened it is more susceptible to creating noise. Extend the boom a little longer than is needed and grip it closer to the middle. The end of the boom will counterbalance the weight of the microphone and make the boom lighter to hold. Boom poles should be held over the shoulders or at chest height, parallel to the floor. To reduce the load, arms should be held straight up
so that the body looks like a capital “H.” This keeps the weight on the bones and not on the muscles. Holding a boom with the arms stretched outward from the body (so the body appears like an uppercase “Y”) bears more of the weight on the muscles and is more fatiguing. Besides being less tiring, keeping the arms vertical gives the operator more range of motion and allows him to quickly move the mic in and out. The operator wouldn’t want to hold the boom in the “Y” position for extended periods of time, but it helps when booming quick action or booming in confined spaces. While both arms should be vertical most of the time, each arm has its own purpose. The front arm, facing the mic, should support the boom at all times, while the rear arm is used to steer the boom toward the sound source.
Shotgun microphones should be placed just above, below, or to the left or right of the subject’s voice. Be careful not to get the mic in frame, and do not trust the viewfinder on the video camera; viewfinders tend to underscan the actual recorded image. To ensure that the mic is not in frame, use a field monitor with overscan turned on.
Monitoring and Recording Audio Like video, audio needs to be monitored to prevent distortion. Audio levels are monitored on the VU meters of the camera or mixer. A volume-unit (VU, pronounced “vee-you”) meter measures perceived loudness in decibels (dB). The VU meter was designed with weighted ballistics (dynamic response) that approximate the response of the human ear. This design decision leads to two important things to remember when reading a VU meter:
• A VU meter does not measure loudness in real time; rather, it measures average loudness.
• 0 dB on a VU meter is not 0 dB, the threshold of human hearing, but the maximum audio level at which sound is recorded distortion-free.
Since a VU meter measures average loudness, it does not accurately represent sudden spikes in audio. This is partly good, as high audio levels are not necessarily bad if they are a little over 0 dB and short in duration. Audio levels can occasionally go into the red zone, but they should not peak at the top of the meter or the recorded audio will be distorted. Likewise, if the levels remain too low, a subtle hiss will become apparent when either boosting the levels during post or increasing the volume of the final mix.
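The averaging behavior can be illustrated with a few arbitrary sample values. This is a sketch, not a model of real VU ballistics: a brief transient barely moves the average a VU-style meter reports, even though its peak is near the clipping point.

```python
# Sketch: why an averaging meter (like a VU meter) under-reports a brief
# spike that a peak-reading meter would catch. Values are arbitrary
# sample levels where 1.0 is the clipping point.
samples = [0.2, 0.2, 0.2, 1.0, 0.2, 0.2]  # one sudden transient

average = sum(samples) / len(samples)
peak = max(samples)

print(round(average, 2))  # 0.33 - the needle barely moves
print(peak)               # 1.0  - the transient may still clip
```

This is why brief excursions into the red are tolerable on a VU meter while sustained pegging at the top is not.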
Figure 11: VU meters for a two-channel mixer, the Shure FP-33
Microphone and Line Level Microphone level, or “mic” level, is the reference level for audio originating from microphones. It is normally lower than line level. Line level is the reference level for audio originating from mixers, headphone jacks, and other audio equipment. Both the DVX-100 and the XL2 accept both levels, so if you are plugging a microphone directly into the camera, use mic level, and if you are attaching a mixer or patching into a mixing board, use line level.
Chapter 4: Audio | Monitoring and Recording Audio
Unbalanced and Balanced Lines An unbalanced cable has one shielded conductor. The cable is shielded to block electrostatic noise from fluorescent lights and other mechanical sources. Electromagnetic noise, which comes from power lines and certain radio frequencies, is picked up by the single conductor and affects the recorded audio. Examples of unbalanced cable are consumer-grade mini, RCA, and quarter-inch cables. Unbalanced cable cannot run long without risking noise and loss of signal strength. A balanced cable has two shielded conductors. The shielding easily blocks electrostatic noise, and a balanced cable blocks electromagnetic noise because both conductors pick up the noise equally and it cancels out, while the audio that is meant to be recorded passes through unaffected. Examples of balanced cable are three-pin XLR cable and some mini and quarter-inch cables. As always, consult the technical specifications for the cable if you are unsure. Balanced cable can run long without fear of picking up noise. Once a balanced cable is connected to an unbalanced cable, circuit, or adapter, the entire signal becomes unbalanced.
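The cancellation can be sketched numerically. This is a simplification (real balanced inputs subtract the two conductors in analog circuitry): the signal is sent in opposite polarity on the two conductors, induced noise lands on both equally, and taking the difference recovers the signal while the noise cancels.

```python
# Integer sample values keep the arithmetic exact. The "hot" conductor
# carries the signal, the "cold" conductor carries it inverted; noise is
# induced identically on both along the cable run.
signal = [0, 5, 10, 5, 0]
noise = [3, -2, 1, 4, -1]

hot = [s + n for s, n in zip(signal, noise)]
cold = [-s + n for s, n in zip(signal, noise)]

# The balanced input takes the difference of the two conductors.
received = [(h - c) // 2 for h, c in zip(hot, cold)]
print(received)  # [0, 5, 10, 5, 0] - the original signal, noise gone
```

An unbalanced cable has no inverted copy to subtract, so whatever noise the single conductor picks up stays in the recording.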
Using the Camera to Mix Sound With both the DVX-100 and the XL2, it is possible to record the same microphone across two audio channels. For instance, on the DVX-100, plug a microphone into Input 2 and, under the LCD screen, set the input sources for both tracks to Input 2. Next, set the audio gain a little higher for Channel One. Now you have two versions of the same audio track. If Channel One becomes too hot, you can switch to Channel Two until Channel One is back at regular levels.
If the mic requires phantom power, turn it on here. On the camera, select the same input for both channels and make sure that the microphone is plugged into this input. When there is only one microphone available, a good solution is to record this single sound source on both audio channels on the tape and adjust the levels differently, so peaks can be avoided on one track while the other gets a boost when the audio is low.
Two level settings for the same audio.
Figure 12: Mixing sound in the camera
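The safety-track idea can be sketched in a few lines. The units are arbitrary, with 1.0 as the clipping point; this is an illustration of the principle, not camera behavior: the hotter channel clips on a loud passage, while the lower-gain copy of the same source survives intact.

```python
# Sketch: recording one source on two channels at different gains so a
# clean copy survives if the hotter channel clips. Arbitrary units where
# 1.0 is the clipping point.
def record(source, gain):
    return [min(s * gain, 1.0) for s in source]  # values above 1.0 clip

source = [0.1, 0.4, 0.9, 0.3]
ch1 = record(source, 2.0)   # hot channel: clips on the loud sample
ch2 = record(source, 1.0)   # safety channel at unity gain

print(ch1)  # [0.2, 0.8, 1.0, 0.6] - third sample clipped
print(ch2)  # [0.1, 0.4, 0.9, 0.3] - clean
```

In the edit, the quieter passages can be taken from the boosted channel and the loud passage from the safety channel.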
Riding the gain is nearly impossible without a mixer. Sure, the shooter or cinematographer can make adjustments to the gain with on-camera controls, but this only works for run-and-gun situations. On most shoots, it is much more efficient to let the sound recordist handle the sound because the cinematographer needs to tend to the picture; it simply becomes too much to ask a cinematographer to also tell the director and crew each time the sound is off. The recordist cannot touch the camera’s audio level controls while the cinematographer is using the camera. Likewise, the recordist cannot watch the VU meters on the camera while it is in use. A mixer has better level, panning, and mixing controls than a camera, and its VU meters are more accurate and easier to read than those on a camera. A mixer almost always has a better pre-amplifier than a camera. In addition, mixers provide phantom power, which has the side benefit of conserving the camera’s battery since the camera won’t power the microphones.
Using a Mixer A mixer prevents audio levels from peaking by adjusting the output levels and turning on a limiter. When monitoring audio, the sound recordist listens to the incoming sound while watching the VU meter. Before the VU meter peaks, she lowers the output levels. Conversely, when a desired sound is too faint, she raises the levels. This method, referred to as “riding the gain,” prevents distortion at high audio levels and noise at low audio levels, and it should only be done under these extreme conditions; it shouldn’t be done for small to moderate changes in audio levels. While the recordist wants clean audio free of low-level noise and high-level distortion, consistent sound level is crucial for editing different takes of the same scene. In a best-case scenario, the recordist knows when the talent will raise or lower his voice and she can make the proper adjustments. On interviews, levels can be set during a sound check and then kept at that level without having to worry about peaking. It is probably best to set the levels a little lower for subjects or actors who tend to get more animated once the tape begins to roll, since they are more likely to raise their voices and peak the audio meters.
Figure 13: A Shure FP-33 three-channel mixer in a Porta Brace bag.
In less optimal situations, the recordist has to ride the gain and turn on a limiter if the mixer has one. A limiter is sort of like a knee circuit for overbright images: it prevents high-level audio from peaking by softly clipping it. Clipping has the effect of dulling the sound, but it also keeps the sound from becoming distorted. Some mixers also have a low-level cutoff that eliminates low-level buzzing or hissing from the incoming sound. On a multi-track recording, it is best not to pull temporarily unused microphones all the way down. When someone is not speaking, lower the input levels for his microphone until he can be faintly heard. When he does begin to speak, this will make it easier to ramp up his levels and it will sound more natural.
The most basic scenario uses two microphones connected to the camera’s two channels. While this approach is compact, the shooter is also doing the mixer’s job, which can be difficult in demanding situations.
An intermediate scenario uses two or more microphones connected to a mixer. This approach offers greater control over the audio and picture because they are handled separately.
An advanced mixing scenario uses two or more microphones connected to a mixer feeding a digital audio recorder. A digital recorder records at higher sample and bit rates.
Figure 14: Mixer scenarios with and without a recorder
Running Sound Checks Sound checks are important for recording good audio. They involve having the talent talk at normal levels and making adjustments on the mixer so that the audio is recorded free of distortion. Besides checking recording levels, a sound check should be done on all of the recording equipment. Batteries should be fresh, wireless units should be set to the correct frequencies and properly fastened to clothing, and any noise that is not meant to be recorded should
be eliminated or neutralized as much as possible. Noise from generators, equipment, crew members, and even unpredictable location noise from locals can all be dealt with. In some cases you need to ask ahead for permission to turn off things in the set’s surroundings. An assistant director or line producer can enforce the quiet-on-the-set rule for crew members. If there’s someone next door using power tools, ask him to stop, and be willing to pay him to do so if you are compromising his ability to earn a living.
Recording Production Audio After the rehearsal, sound checks, lighting, staging, and framing are complete, a setup is ready to be shot. In the previous chapter, the workflow for initiating a take is listed. (See Setup Workflow on page 123 for more information.) After each take, the recordist takes notes on the take and gives an okay to the director if it was good. Likewise, the director says “Print” if she likes a specific take and wants to see it in the dailies. If there is an audio mistake on the take, the recordist tells the director and the take is done again. To reduce this possibility, it is crucial that the assistant director enforce the quiet-on-the-set policy. Regardless of the quality, the recordist notes the take and comments on its sound characteristics as well as its location on the recorder. All of this information helps the sound editor make the best use of the audio. At times, picture from one take is paired with audio from another; when showing reaction shots, for example, an audio take from the other actor may be used. Second Sound Setup Second-system sound is the norm in film production, since film cameras do not record audio. With DV, audio and video are recorded on the same tape. Second-system sound is still valuable to 24p video productions because it is insurance against tape dropouts and, depending upon the quality of the recorder, can be significantly higher quality and allow for recording more independent audio tracks. Clapper A clapper, or slate, is used to label each scene on the tape. It is also used for synchronizing picture with sound in post. A slate that takes dry-erase markers is fairly inexpensive. Writing numbers on repositionable tape may make assembling the slate for each take easier.
Chapter 4: Audio | Monitoring and Recording Audio
Figure 15: Clapper
Sound Recorder
There are many viable recording options now with digital technology. Previously, the recordist would use a Nagra analog reel-to-reel recorder. That was supplanted by digital audio tape (DAT), but even DAT is now slowly being replaced by solid-state or hard-disk-based recorders. Regardless of the format you choose, the decision should be based on the features you need, such as recording format, quality, and the number of tracks recorded. Fostex, Edirol, and now Sound Devices are making some of the most innovative and inexpensive recorders available, with features previously limited to ultra-high-end recorders.
Figure 16: Hard disk recorders (left) are beginning to replace DAT recorders (right) in location audio recording.
Recording Room Tone and Effects
Room tone is at least 30, and preferably 60, seconds of audio recorded on set before sets and equipment are broken down. It is used by the sound editor when doing automatic dialog replacement (ADR) work or when patching over pops and clicks in the soundtrack. It’s an important step in audio production and should not be overlooked in either narrative or documentary work. Any usable sound effects should also be recorded on set. This may be a door closing or opening, footsteps, slaps, spills, car engines, horns, the opening of a jar, or anything else the editor might need to make the soundtrack complement and support the picture.
24p Editorial and Postproduction
A project runs more smoothly when the editorial and postproduction processes are organized. By vigilantly managing and consistently archiving your assets, you protect yourself from losing valuable work, and you keep your work up to date.
Chapter 5: 24p Editorial and Postproduction | Editing and 24p Postproduction
Editing and 24p Postproduction
Once the first tape leaves the camera, postproduction often begins. Postproduction begins with editing the picture, revising it until it is “locked,” and then using the locked edit as a reference for sound sweetening, titles, and color correction. This chapter covers the mechanics of postproduction, not the art. Many books are dedicated to the art of editing, title design, sound mixing, and visual effects, and a single chapter could never cover all that material. Several such books are listed later in this chapter if you are interested in learning more. The mechanics, however, are rarely covered and tend to be learned through apprenticeship, from colleagues, or from the rare teacher with real-world production experience. This chapter presents workflows that current filmmakers follow when working with 24p in postproduction, and it provides a basic explanation of how to work with 24p footage in popular editing and effects applications.
A High-Level Overview of Postproduction
When I began to write this chapter, I created a rough outline of the workflow most people encounter in postproduction and ran it by several colleagues who are directors, producers, or filmmakers. The following is a brief, high-level view of the tasks involved in post, listed in the sequence most filmmakers follow.
SET EXPECTATIONS
Often the sound department will want a rough edit to begin work. Remind them that a rough edit is not final.
TITLES & COLOR CORRECTION
Apply titles after color correction. This ensures that colors used for the titles are not affected by the color correction.
LOCK PICTURE!
Ideally the picture is locked before going to visual effects. The reality is that you won’t know how the edit works until the visual effects are done. But be forewarned: a common mistake made by filmmakers in post is managing a moving target, the edit. A final cut becomes a not-so-final cut because the indie director has access to the same tools and is left alone at different stages of the project to make changes to everything. Soon the sound editor’s cut is off, comps need to be redone, or the latest version of the edit is unknown. Your best bet is to continually back up projects, track assets, stick to milestones, and resist the temptation to tweak the edit all the way to the end.
Figure 1: A high-level view of the postproduction workflow (acquire footage, edit, visual effects, sound exploration and final sound mix, color correction, titles, and mastering and output)
• Acquire footage: This is the laborious process of importing and organizing footage from the camera, film and video archives, and stock houses into a project file inside an editing program such as Apple Final Cut Pro or Adobe Premiere Pro.
• Edit: This is where a story is constructed from the footage acquired. The first step is to create a first assembly, or rough cut, then improve the edit by screening it and incorporating feedback. The edit progresses from a rough cut to a fine cut and then a final cut, at which point it is sent out for sound editing, titles, and color correction.
• Create visual effects: These augment existing shots; they are done during the edit and need to be completed before the edit is locked. A visual effect can be created entirely through computer-generated imagery (CGI), or it can be created by compositing live-action footage with CGI. Visual effects are created when an action cannot be performed by actors or is too dangerous, too expensive, or impossible to shoot.
• Design sound: Sound, like the video image, needs to be adjusted for optimal playback and enhanced through mixing, music, and sound effects to better tell the film’s story.
• Create titles: Titles are the text that appears as opening and ending credits. Titles also include text that appears in the film to indicate location or scene changes.
• Perform color correction: Color correction is the process of adjusting the picture’s brightness, hue, or contrast. It is most frequently done to improve image quality, known as primary color correction; correcting underexposed footage is one example. Color correction has also come to mean artistically manipulating the picture to replicate looks created by film development processes, in-camera effects, or lens filters, known as secondary color correction.
• Master the final edit: The term mastering loosely refers to the process of preparing the footage for the target distribution medium, whether it is DVD, broadcast, the Internet, or film. Each distribution medium has its own set of production checklists to go through before a project is ready to be distributed.
Postproduction Roles
The director and producer will sit in throughout the postproduction process and be involved at a hands-on level with the edit. They collaborate heavily with the editor, FX artist or supervisor, and sound editor. On larger-budget projects, there will be an assistant editor, one or two visual effects artists, and perhaps a music composer in addition to a sound editor. If you are financially challenged, networking with other filmmakers can earn you a lot of favors.
• Editor: The editor helps shape the story from the footage that is shot, her personal notes on the script, and her collaboration with the director. Helping the editor may be one or a number of assistant editors who wrangle tapes and dailies between the editorial and other postproduction departments and who help with project management.
• FX artist: An FX artist completes visual effects shots for the film, often doing the bulk of the work in a 3D application such as Maya, LightWave 3D, Electric Image, or Cinema 4D and in a compositing application such as After Effects, Combustion, Shake, or Digital Fusion.
• Sound editor: The sound editor takes the final cut and improves the sound levels, adds sound effects such as footsteps or closing doors, and adds background music.
• Video online operator: This person is responsible for making sure every frame in the final edit is properly adjusted for broadcast. Adjusting colors to be broadcast-safe takes up most of his time.
• Film finishing technician: This person takes the final edit, color-corrects each shot, and takes video-sized footage up to film resolution before the footage is recorded to film.
Learning the Craft of Editing
If you want to learn how to be a good editor but cannot afford film school, an alternative approach is to learn the technical skills and apprentice with a skilled editor. An assistant editor used to be hired for organizational skills; some say they are hired today for technical skills. I would argue that both are needed. Having technical skills means you know how to drive a high-powered editing workstation, do some computer troubleshooting, and perhaps have Photoshop and basic Excel or FileMaker skills to boot. If you are not highly organized, now is the time to learn. Experienced editors are always looking for someone who can meticulously file and label production notes and source tapes and prepare photo storyboards from master clips. Multitasking is also crucial, as you’re apt to be capturing footage on one machine, burning DVD dailies on a second system, and doing some editing or effects on a third. If you do accept a job as an assistant editor or as an intern, make sure you get time to work closely with the editor. On some projects, the assistants will toil at night capturing and preparing footage for the editor to work with in the morning. If you work on opposite schedules, there is little time to learn the craft from the editor first-hand. Here are a few books on the craft of editing. They lean toward the aesthetic side, but if you cannot get a job as an assistant editor to someone who can really teach you how to edit, read these and watch the films mentioned in them.
• In the Blink of an Eye by Walter Murch.
• The Conversations: Walter Murch and the Art of Editing Film by Michael Ondaatje.
• Behind the Seen: How Walter Murch Edited Cold Mountain Using Apple’s Final Cut Pro and What This Means for Cinema by Charles Koppelman.
• Nonlinear Editing: Storytelling, Aesthetics, & Craft by Bryce Button.
• The Eye Is Quicker by Richard D. Pepperman.
• Grammar of the Edit by Roy Thompson.
Editorial Strategies
Every editor has their own organizational methods, habits, and preferences. The method you choose greatly depends upon your resources, your personal work habits, and the length of your project. The following sections cover a few common workflow practices for editing narrative or documentary films.
Before Logging Begins, Label Every Tape
The editorial workflow begins immediately after a shoot is finished by labeling each tape with a short name meant for organization and a brief description for identification. Some filmmakers will label tapes according to the project name with at least a three-digit extension. For example, a tape from a film on San Francisco’s Golden Gate Park would be labeled “sfggp_001.” Let’s say that this tape was used for shooting a concert in Stern Grove, a performance space for concerts in Golden Gate Park. The tape would be given a short description such as “Stern Grove jazz concert: performance.” If more than one tape was used in this shoot, the description would then be “Stern Grove jazz concert: performance (tape 1 of 3).” The second tape would be labeled “sfggp_002” and its description would be “Stern Grove jazz concert: interviews (tape 2 of 3).” The third tape would be labeled “sfggp_003” and its description would be “Stern Grove jazz concert: cutaways (tape 3 of 3).”
Chapter 5: 24p Editorial and Postproduction | Editorial Strategies
Don’t mix projects on a single tape. For example, don’t use the same tape to shoot a wedding and a corporate event. If you do, labeling, organization, and retrieval become complicated. When you log clips from a tape, use the tape’s unique name for the clips’ “reel” name, a clip attribute that identifies the clip’s source tape. If you do not keep the reel names consistent across clips that originate from the same tape, it will be difficult and confusing to recapture the footage when a drive fails or when you are preparing your project for an online editing session for broadcast or a tape-to-film conversion. When the tape names match between the Project window and what is written on the tape, sorting, navigating, and searching a complex project also become easier.
Figure 2: Consistent naming makes locating and recapturing footage a breeze. Use the same tape name throughout production and post; with consistent labels, clip organization and recapturing material are easier.
Logging Styles
When capturing media, you can choose to capture individual clips from tape, or capture the entire tape and later break it into smaller clips. A single, long capture produces less wear and tear on a camera’s or deck’s transport mechanism and gives you access to all of the material. Long-format capture, however, requires a lot of space to store the material: you’re looking at roughly 13 GB for each hour of DV-compressed footage. With long captures, the process of identifying the material you want to use happens after the capture. Typically you create virtual clips for good material that reference portions of the long clip. If you wish to later consolidate the project down to only the media used, programs such as Final Cut Pro and Premiere Pro can both remove unused footage from a project. Long captures work well when you have the storage space or have a limited amount of time to work with the source tapes. If you have someone on set during production to immediately capture footage as it is shot, the editor can view the footage on her desktop, mark the good takes, and begin creating a first assembly.
LONG CAPTURE WORKFLOW: Capture entire tapes; review and subclip from the master clips, entering log notes; edit (assembly, rough, fine, final); then media-manage. If other takes are desired, the editor returns to the master clips and creates new subclips. Putting projects on a diet: by distilling the project down to only the media in use, projects become easier to exchange and archive; most NLEs have this feature.
SHORT CAPTURE WORKFLOW: Capture short clips, entering log notes; edit (assembly, rough, fine, final); then media-manage. If other takes are desired, the editor returns to the tapes to log and capture. Media-managing a project that used short captures can still help by removing clips, music, and stills that were not used.
Figure 3: Long and short capture workflows
Conversely, capturing short clips is more time-intensive initially because you play, rewind, fast-forward, and scrub footage as you log each clip. Once everything is identified, you then batch-capture everything from the tape. If you need to use another take or want something else from the tape, you insert the tape and repeat the process. Short captures work incredibly well when you have a small amount of source footage and know what you need to capture. Short captures are also obviously good when you have limited drive space. If you have sent your tapes out for dubbing and have had them transcribed, which is discussed later in this chapter, short captures are a good way to go, since you can create a paper edit from the window dubs in a spreadsheet program such as Excel and then import the timecode values into an NLE for batch capture.
Naming and Organizing Bins
When creating a new project, I begin by creating a top-level bin named “source” for the source tapes. When I use more than one tape on a shoot, I create a bin for the shoot and put all the source master clips, usually long captures of a single tape, in that bin. For example, with the three-tape shoot in Stern Grove, I would create a bin named “Stern Grove (001-003).” The numbers in parentheses tell me that this shoot had three tapes. If there are timecode breaks, I capture clips as long as the timecode breaks allow and save these clips using the same base name but with an alphabetical extension. For example, if tape 2 from the shoot in Golden Gate Park resulted in three clips caused by timecode breaks, the master clips would be named “sfggp_002a,” “sfggp_002b,” and “sfggp_002c.”
After source bins have been created but before the material has been reviewed, I create another top-level bin named “selects” for documentary projects, or top-level bins representing the screenplay’s scenes for narrative projects. Selects are thematic. Recalling the shoot in Golden Gate Park, I might have selects bins on gardens, fields, history, playgrounds, and concerts. These themes are usually driven by the documentary’s project proposal or treatment. As the documentary story line develops, additional selects bins might be created as the documentary covers new material. Narrative projects are far more cut-and-dried, since the material is organized according to the script. Often I will also have additional bins for audio, graphics, stills, lower-thirds, animation, and visual effects. Depending upon the number of shots that do not originate from video, there might be a simple “Other” bin containing a few non-video files, or there might be several unique bins that contain specific kinds of media.
NARRATIVE PROJECT: This narrative project uses a mechanical style of project organization. Clips are given short alphanumeric names that refer to the scene, take, and angle. Labels are used to indicate which clips are the best take. The clips in this project are subclips (notice how the icon looks like a torn movie clip) from a master source clip.
DOCUMENTARY PROJECT: Since there is so much more material, documentary projects will have many more bins than narrative work. This project uses a descriptive approach to naming clips and captures individual clips from the source tapes.
Figure 4: Project and bin organization
Naming Clips
Clip-naming styles can be descriptive, mechanical, or a combination of the two. The descriptive style emphasizes names that describe the clip, while the mechanical style assigns a code to each clip and is usually built from the shot list. A descriptive clip name might be “wide-shot-mark-after-meeting,” and the mechanical name would be “S001T03A,” which stands for the third take in the first scene using camera A. Descriptive styles work very well for documentaries, where the shots are not planned, though a mechanical style could be applied with some effort. When working on a narrative project such as a short or full-length feature, the mechanical approach is best, since it’s very easy to establish a one-to-one relation between the shot list, the continuity supervisor’s notes, and the footage shot. The approach that I recommend is to adopt practices from both methods by using the asset management and metadata features of your editing package. The Final Cut Pro Browser window and Premiere Pro’s Project window both offer many additional columns to store editing notes and clip characteristics. Regardless of whether the clip name is descriptive or mechanical, I make use of these additional data fields to store extra information.
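The mechanical scheme lends itself to scripting, so clip names can be generated and checked consistently. The sketch below (mine, not the book’s) follows the “S001T03A” pattern described above; the function names are hypothetical.

```python
import re

def make_clip_name(scene, take, camera="A"):
    """Build a mechanical clip name such as 'S001T03A':
    zero-padded scene and take numbers plus a camera letter."""
    return f"S{scene:03d}T{take:02d}{camera.upper()}"

def parse_clip_name(name):
    """Split a mechanical clip name back into scene, take, and camera."""
    m = re.fullmatch(r"S(\d{3})T(\d{2})([A-Z])", name)
    if not m:
        raise ValueError(f"not a mechanical clip name: {name}")
    return {"scene": int(m.group(1)),
            "take": int(m.group(2)),
            "camera": m.group(3)}
```

Generating names this way keeps the shot list, continuity notes, and bins in lockstep, and parsing them back makes it easy to sort or filter subclips by scene or take.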
Archiving Work
The first things that should be kept in a safe but accessible place are the physical source tapes. Some editors like to keep these in racks made for DV tape; others keep them in a box. The main thing is to make sure all the tapes are labeled correctly before they go into storage and that they are kept separately, not spread among books, audio CDs, and last month’s DV magazine. Like any other precious material or gear, your source tapes should be stored safely and organized so you can quickly return to them. Window dubs on VHS tape or DVD should also be organized and kept handy. If a client or colleague has the source tapes, remind them to keep the tapes organized and safe from heat and moisture, and ask for dubs of the tapes if you will need to use them frequently or want your own backup. Likewise, the other physical artifacts of preproduction, production, and post should be looked after; this is a great task for an intern or assistant editor. Documents such as scripts, storyboards, shot lists, continuity notes, cast and crew contact lists, and sound recordist notes should be accessible and organized so the editor can reference them as needed. Last, and certainly most important, are the project files and other digital assets used in postproduction. It’s just too easy to have a drive fail, project files become corrupted, or media files accidentally erased. At a bare minimum, I back up all the project files as well as all still images and audio files. Anything that came from tape can be recaptured, but you can back these files up too if you have space. Backing up media files can be simplified greatly by using the Media Manager in Final Cut Pro or the Project Manager in Premiere Pro to remove unused footage from a project. I typically run a complete backup once a week for everything except large media and render files. Doing a backup every night, however, wouldn’t be a bad idea.
The following table contains a list of popular backup solutions for the individual or small workgroup.

Table 1: Software backup solutions
Product                 Platform(s)           Web Page URL
Apple Backup            Mac OS                www.mac.com (requires a .Mac account)
Norton Ghost            Windows               www.symantec.com/ghost/
EMC Dantz Retrospect    Mac OS and Windows    www.dantz.com
Roxio Déjà Vu           Mac OS                www.roxio.com/en/products/toast/
Roxio BackUp MyPC       Windows               www.roxio.com/en/products/bump/
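If you prefer a homegrown alternative to the packages above, even a short script can implement the weekly policy described in this section: copy project files, stills, and audio while skipping large captured media and render files. This is a minimal sketch; the file patterns and scratch-folder names are illustrative assumptions, not requirements of Final Cut Pro or Premiere Pro.

```python
import fnmatch
import os
import shutil
import time

# Patterns worth backing up: project files, stills, audio, spreadsheets.
# Large captured media and render files are skipped, since anything from
# tape can be recaptured. These names are hypothetical examples.
KEEP = ["*.fcp", "*.prproj", "*.psd", "*.tif", "*.wav", "*.aif", "*.xls"]
SKIP_DIRS = {"Capture Scratch", "Render Files", "Audio Render Files"}

def weekly_backup(project_dir, backup_root):
    """Copy matching files into a date-stamped folder, preserving layout."""
    dest_root = os.path.join(backup_root, time.strftime("%Y-%m-%d"))
    for root, dirs, files in os.walk(project_dir):
        # Prune scratch/render folders so os.walk never descends into them.
        dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
        for f in files:
            if any(fnmatch.fnmatch(f, pat) for pat in KEEP):
                rel = os.path.relpath(root, project_dir)
                dest = os.path.join(dest_root, rel)
                os.makedirs(dest, exist_ok=True)
                shutil.copy2(os.path.join(root, f), dest)
    return dest_root
```

Run weekly (or nightly) from a scheduler, this keeps a dated snapshot of everything that cannot be recaptured from tape.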
Collaboration
While there are products that help ease the pain of workflow collaboration for digital video and film projects, there is really no panacea, and most solutions are prohibitively expensive for groups of low-budget filmmakers. If you cannot afford a dedicated, high-capacity storage area network (SAN) and asset management software, here are some suggestions for creating your own homegrown solution:
• Decide upon software. Make sure that everyone who will be working on the same files has the same software versions. It’s too easy to lose a day of work because someone has an earlier version of a product and cannot open the files you’ve just sent over. If this is not possible, look into saving a version that is readable by the earlier release. There can also be inconsistencies between versions of software plug-ins. Or worse yet, you may use a plug-in that a collaborator does not have on their system.
• Pick file formats. This is important for maintaining consistency and workflow efficiency. While you can work with almost any file format in most situations, it helps reduce confusion and frequent incompatibilities if you standardize upon a set of image file formats and video codecs early on.
• Devise a file-naming strategy to identify content, versions, and ownership. When everyone follows the same strategy, it is easier to identify who worked on a file last and which is the most current version. When in doubt, always view the information for a file, and ask everyone to keep their computer time synchronized with an Internet time server so dates and times are consistent. Ideally, you shouldn’t pass important project files around between major milestones such as fine cut and final cut, but in case you do, pad file names with numeric extensions and add your initials to the file name. For example, “city_movie_008_js” would be the eighth version of the project that I, John Skidgel, worked on.
• Choose a method for sharing files. If you have a dedicated network, stick to a single server for exchanging files. If everyone has the same media locally, an Internet storage site such as .Mac can be an effective way to share project files.
• Agree on a process for collaboration, completing work, and communicating changes. This means sticking to a schedule, adhering to milestones, and respecting the decisions of the person with creative control and ownership.
• Encourage everyone to archive work frequently. Again, if everyone keeps track of their work and backs their files up, the worst situations can be prevented, and when work is lost, it can be rendered, updated, or recreated without too much pain.
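The numeric-extension-plus-initials convention above (e.g., “city_movie_008_js”) can be enforced with a tiny helper so nobody mistypes a version number during a hand-off. This is an illustrative sketch; the function name is my own.

```python
import re

def next_version(name, initials):
    """Given a project file name like 'city_movie_008_js', return the
    next zero-padded version stamped with the current editor's initials."""
    m = re.fullmatch(r"(.+)_(\d{3})_([a-z]+)", name)
    if not m:
        raise ValueError(f"name does not follow the convention: {name}")
    base, num = m.group(1), int(m.group(2))
    return f"{base}_{num + 1:03d}_{initials.lower()}"
```

For example, `next_version("city_movie_008_js", "ah")` yields `"city_movie_009_ah"`, so the hand-off from one editor to the next stays visible in the file name itself.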
Narrative Editorial Strategies During Production
With narrative work, it’s in your best interest to have the editor on the set, digitizing tapes as they are shot and screening them quickly to ensure you are getting what you need for postproduction. When the editor is on set, he can offer suggestions for additional cutaways or setups that can serve as alternatives in the edit. These suggestions become even more crucial if storyboards were not done or were done without much thought. Having the editor on set will also get him acquainted with the footage; he’ll take notes on which takes were good, and it will make his job easier.
By quickly logging and capturing takes, the editor can begin to make first assemblies for a scene and see whether the takes are good and whether additional material should be shot. It’s a lot easier to do an additional setup during production than to do a reshoot in the middle of postproduction. Screening dailies or first assemblies can be good for the director for the same reason: if he isn’t getting what he needs, the team can quickly do a reshoot. Dailies can also help maintain continuity. By reviewing them against continuity notes, you are sure to catch anything that could affect the film’s continuity.
DIGITIZE ON THE SET: In addition to monitoring the signal, Dave batch-converts the 720p material to 24p DV-resolution files as it is shot and hands it off to Anthony.
HAVE THE EDITOR ON THE SET Anthony, the editor, watches the shot on an HD production monitor and takes notes for the edit.
DISCUSS THE DAILIES Once dailies have been shown, the editor discusses the footage with the production team and the director decides if a reshoot is required.
BEGIN THE EDIT ON SET With DV footage transferred to a portable hard disk, Anthony begins the edit. Later, the edit will be reconnected to the HD footage for final mastering.
Figure 5: Working with dailies with a short film shot in HD
Documentary Editorial Strategies
With documentaries, it is not uncommon to have hundreds of hours of footage. For a feature-length documentary film, this can equate to a shooting ratio of 10:1 or more, meaning only 10 percent or less of the footage shot will be used. Digesting all this footage and selecting the gems may sound like an insurmountable task, but it isn’t if you follow the workflow many documentary producers have followed since before the days of desktop editing:
• Have VHS or DVD dubs made of source tapes immediately, and have timecode “burned” into the frame. The sooner you do this, the sooner the dubs can be watched for good material.
• Send the dubs out for transcription. While it is costly, it will take you a lot less time to read a video transcript than to watch hours of video. Expect to pay $30 to $100 per hour of footage. If you have a budget or are seeking grant money, transcription services are a great investment and should be included in your production costs. This is also a potential job for the organized and highly motivated intern or assistant editor.
• Read the transcription, mark what you like, and then watch it on the tape. If you really like it, highlight the transcription and make notes in the margins. Note the timecode and enter it into a spreadsheet. Pad the in and out points of each good clip. Old-school editors will create copies of the printed transcription and create a paper edit by cutting and pasting the desired footage together into a printed narrative. Today, the transcription can be delivered digitally, and it’s even easier to copy and paste the timecode values as well as the transcription into an NLE to serve as highly descriptive log notes.
• Import the spreadsheet into an NLE and batch-capture the material. Organize and set up your bins and begin editing.
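The padding step above is simple arithmetic on timecode. The sketch below (not from the book) pads each logged clip with two seconds of handles before the in and out points go into the batch-capture spreadsheet. It assumes non-drop-frame timecode at a whole 30 fps for clarity; real NTSC material runs at 29.97 and is often logged in drop-frame, which needs more careful math.

```python
FPS = 30  # non-drop-frame, whole frame rate assumed for this sketch

def tc_to_frames(tc):
    """Convert 'HH:MM:SS:FF' timecode to a frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(total):
    """Convert a frame count back to 'HH:MM:SS:FF' timecode."""
    total = max(total, 0)  # clamp so padding never runs before 00:00:00:00
    f = total % FPS
    s = total // FPS
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f:02d}"

def pad_clip(tc_in, tc_out, handles=2 * FPS):
    """Pad a logged clip's in and out points by `handles` frames."""
    return (frames_to_tc(tc_to_frames(tc_in) - handles),
            frames_to_tc(tc_to_frames(tc_out) + handles))
```

A spreadsheet export run through `pad_clip` gives the NLE a little extra material on both sides of every selection, which protects transitions and trims during the edit.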
Figure 6: Reviewing window dubs and video transcription
Acquiring and Interpreting 24p Material
Most popular non-linear editing systems now support the 24p modes offered by the Panasonic and Canon cameras. When you shoot 24p standard, it’s as if you shot on film and had the footage telecined to a DV tape, so shooting this way means you want the film look but are editing a project slated for broadcast or standard video playback.
Misconceptions on Working with 24p
• A normal DV VTR is fine for capturing 24p and 24pa footage from the DVX-100 and the XL2. Both cameras capture at 23.976 and apply pulldown in camera, so the resulting footage is recorded to tape at 29.97. When you capture the footage, the NLE can be instructed to revert to the original frames using the recording mode’s pulldown cadence (2:3 for 24p and 2:3:3:2 for 24pa).
• You can’t print back to tape at 24p or 24pa, because the footage is really 29.97 with pulldown applied.
• 24p is only for film-outs. No; it has several other advantages. Once you capture the footage, you can edit in a 29.97 timeline for compatibility and the film look, or you can edit in a 23.976 timeline for a progressive DVD, streaming video, or CD-ROM.
Chapter 5: 24p Editorial and Postproduction | Acquiring and Interpreting 24p Material
Editing 24p Standard Material
When shooting 24p standard, you capture and edit at 29.97 in Final Cut Pro. While this sounds incorrect, recall that both 24p recording modes actually lay the footage to tape at 29.97 after applying pulldown. If you were to capture at 23.976, you’d be throwing frames away. You edit at 29.97 because 24p standard mode is meant for those who want the timing and look of film but are aiming for NTSC distribution such as network broadcast or VHS tape. The footage also works with existing 29.97 video because it has been converted to 29.97 in camera. Another way to look at 24p standard is to say that it is akin to working with telecined film footage: while the original footage was shot at 24, transferring it to video makes it compatible with standard NTSC video and NTSC broadcast standards. If you accidentally shot in 24p standard mode but want to edit at 23.976, you have to remove the pulldown before placing the footage in a sequence running at 23.976; use After Effects’s Interpret Footage dialog or Cinema Tools for this task. In Premiere Pro, 24p standard material has to be edited at 23.976 because the application automatically removes pulldown from 24p standard footage. This means you cannot get the film look in Premiere Pro by shooting in 24p standard and editing at 29.97.
Editing 24p Advanced Material
Advanced footage should also always be captured at 29.97; it too is laid down to tape at 29.97. The mode preserves the original frames using 2:3:3:2 pulldown and flags the original frames inside the DV transport stream. An NLE reads this stream and reconstructs the original frames. If you capture at 23.976, you are throwing out frames and the valuable metadata that’s needed to recreate the 23.976 time base. You edit at 23.976 because once the pulldown is removed, the media is 23.976. While you can edit 24p advanced footage in a 29.97 timeline, it’s not recommended. If you capture 24p advanced footage and don’t remove the pulldown, the footage appears to stutter, because the third frame is repeated in order to preserve the original frames within the confines of interlaced 29.97 video. While this repeated frame is discarded when you remove pulldown, it remains when you don’t, and it is noticeably jarring. If you accidentally shot in advanced mode but need compatibility with standard NTSC video, capture the footage, remove the advanced pulldown, and then apply standard 2:3 pulldown, which converts the footage back to 29.97. The following sections cover the settings to capture and interpret 24p properly in Final Cut Pro and Premiere Pro. In both cases it’s as easy as picking a project preset. If you shoot anamorphic, however, you need to create the presets yourself in Final Cut Pro, which can be done fairly easily once you learn all the rocks to look under.
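The difference between the two cadences is easiest to see in miniature. The following Python sketch (mine, not the book’s) models 2:3:3:2 advanced pulldown on whole frames, ignoring field order and the DV stream flags. It shows why removal is clean: every group of four film frames A B C D becomes five video frames AA BB BC CC DD, and removal simply discards the mixed BC frame.

```python
def add_advanced_pulldown(film):
    """2:3:3:2 pulldown: every 4 progressive frames (A, B, C, D) become
    5 interlaced frames of field pairs: AA BB BC CC DD.
    Assumes len(film) is a multiple of 4 for simplicity."""
    video = []
    for i in range(0, len(film), 4):
        a, b, c, d = film[i:i + 4]
        video += [(a, a), (b, b), (b, c), (c, c), (d, d)]
    return video

def remove_advanced_pulldown(video):
    """Removal drops the third frame in each group of 5 (the mixed BC
    frame) and collapses the remaining field pairs back to whole frames."""
    film = []
    for i in range(0, len(video), 5):
        group = video[i:i + 5]
        whole = [group[j] for j in (0, 1, 3, 4)]  # skip the BC frame
        film += [f1 for f1, f2 in whole]
    return film
```

With standard 2:3 pulldown the pattern is AA BB BC CD DD instead, so two of the five frames mix fields and removal must recombine fields rather than drop a whole frame; that is why 24p advanced is the mode to use when you intend to restore a 23.976 time base.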
Chapter 5: 24p Editorial and Postproduction | Acquiring and Interpreting 24p Material
Final Cut Pro Final Cut introduced support for NTSC 24p material in version 4. Prior to that, pulldown was removed using Cinema Tools or DV Filmmaker. The quickest way to begin working with 24p footage within Final Cut is to make use of its Easy Setups feature. An Easy Setup keeps track of the default capture, sequence, device control, and video and audio playback settings. The two presets that matter most when shooting in 24p are the capture and sequence presets. You select an Easy Setup by choosing Final Cut Pro>Easy Setup. To create an Easy Setup, choose Final Cut Pro>Audio/Video Settings. If you are working with 24p standard material, you will most likely want to select the DV-NTSC Easy Setup. If you are shooting 24p advanced, be sure the Show All box in the Easy Setup dialog is checked, and select DV-NTSC 24p (23.98) Advanced Pulldown Removal. Easy Setups This dialog is for choosing a setup for the current project. A setup is a collection of several application presets for capture and display.
Setup Menu Expanded A separator bar would help make this clearer, but any custom setups that you create will appear at the top (usually above the first Cinema Tools setup). In this example, I have a custom preset named 24pa Anamorphic DV25 NTSC to make up for the lack of a 24p advanced preset for anamorphic video.
Advanced Pulldown Removal Setup This is the preset you should use for 4x3 24p advanced DV25 NTSC footage.
Setup Menu
This dialog is where you choose a setup to be applied to the current Final Cut Pro project.
Figure 7: Final Cut Pro’s Easy Setup dialog.
Capture Presets Capture presets are the settings used by Final Cut for capturing media. In the Audio/Video Settings dialog, click the Capture Presets tab to select a capture preset for the current Easy Setup or to create and edit new capture presets. If you do create a customized preset and are shooting 24p advanced, make sure to turn on Remove Advanced Pulldown (2:3:3:2) from DV-25 and DV-50 Sources in the Capture Presets Editor dialog.
Audio/Video Settings
This tabbed dialog is where you select and edit the presets used in an Easy Setup.
Capture Preset Editor Settings for the current capture preset are specified in this dialog.
Capture Presets Tab This lists all the capture presets in a project. A check indicates the current capture preset for the current Easy Setup.
Figure 8: Capture Presets tab and the Capture Preset Editor dialog
Sequence Presets Sequence presets are the default settings for newly created sequences in your Final Cut Pro project. In the Audio/Video Settings dialog, click the Sequence Presets tab to select a sequence preset for the current Easy Setup or to create and edit new sequence presets. With sequence presets, the editing timebase is the most important setting. If you are shooting advanced, the timebase should be 23.98. If you happen to place 24p advanced footage in a 60i sequence, you will get render bars because Final Cut needs to render the video and audio to conform to the new timebase.
Audio/Video Settings
This tabbed dialog is where you select and edit the presets used in an Easy Setup.
Sequence Preset Editor Settings for the current sequence preset are specified in this dialog.
Sequence Presets Tab This lists all the sequence presets in a project. A check indicates the current sequence preset for the current Easy Setup.
Figure 9: Sequence Presets tab and the Sequence Preset Editor dialog
Editing Anamorphic and Shooting Advanced If you shoot anamorphic footage, you’ll need to modify the capture and sequence presets by turning on the anamorphic setting. Although Apple added support for 24p advanced in version 4, they still haven’t thrown in a preset for folks shooting 24p advanced with an anamorphic adapter or a camera with a native 16:9 CCD such as the XL2. If you are shooting 24p advanced anamorphic footage and using Final Cut Pro, you will need to create your own Easy Setup. 1. Open the Audio/Video Settings dialog. Select the Sequence Presets tab and duplicate the “DV-NTSC 24p (23.98) Advanced Pulldown Removal” sequence preset. 2. Click Edit…, check the Anamorphic 16:9 setting, and click OK. 3. Select the Capture Presets tab and duplicate the “DV-NTSC 24p (23.98) Advanced Pulldown Removal” capture preset. 4. Click Edit…, check the Anamorphic 16:9 setting, and click OK. 5. Select the Summary tab and choose the new sequence and capture presets you just created. 6. Click Create Easy Setup… and name the new setup “DV NTSC 48 kHz 23.98 Anamorphic.” 7. Use this Easy Setup whenever creating a 24p advanced anamorphic project. Choose Final Cut Pro > Easy Setup… and select it from the drop-down menu.
When shooting anamorphic, check the Anamorphic 16:9 setting in both preset editor dialogs. Figure 10: The Capture Preset Editor and Sequence Preset Editor dialogs have an option for 16:9 material
If you forget to check the anamorphic setting, it is very easy (though potentially tedious if many items need the correction) to set the anamorphic flag on footage and sequences by checking the anamorphic column in the Browser window. If you don’t see the column, scroll horizontally until it appears, then drag it to the left to make it more accessible. I tend to do this and then save a column layout by Control-clicking on a column heading and choosing Save Column Layout.
When the anamorphic flag is not set, anamorphic footage will look horizontally squeezed in the viewer.
Checking the flag fixes it quickly.
Figure 11: Reviewing sequence and asset anamorphic settings
Playing Back Video Final Cut Pro has three 24p playback settings for previewing 23.976 material over FireWire: 2:3:2:3, 2:3:3:2, and 2:2:2:4. All three settings indicate how Final Cut reinserts
pulldown for presenting a sequence with 23.976 or 24 fps material on an NTSC preview monitor. 2:3:2:3 uses a 2:3 pulldown pattern and has the least amount of temporal artifacts or jitter, but it requires more processing power than the other two methods. 2:3:3:2 has the same pulldown as 24p advanced material; as a result, there is a slight stutter. 2:2:2:4 has fairly noticeable jitter, as it repeats an entire frame to ensure real-time playback, and it is the least processor-intensive. When considering how smoothly you want the 24p material to play back, you need to factor in the number of real-time effects and simultaneous video and audio streams.
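A quick sketch makes the trade-offs among the three patterns visible. The hypothetical `cadence` helper below expands each pattern into fields and pairs them into video frames; field-mixed frames spread motion across the cadence, while a whole-frame repeat reads as jitter:

```python
def cadence(pattern, frames=("A", "B", "C", "D")):
    """Expand a pulldown cadence into fields, then pair consecutive
    fields into (top, bottom) interlaced video frames. Letters stand
    in for source 24p frames; purely illustrative."""
    fields = []
    for count, frame in zip(pattern, frames):
        fields += [frame] * count
    return list(zip(fields[0::2], fields[1::2]))

for name, pat in [("2:3:2:3", (2, 3, 2, 3)),
                  ("2:3:3:2", (2, 3, 3, 2)),
                  ("2:2:2:4", (2, 2, 2, 4))]:
    video = cadence(pat)
    mixed = sum(1 for t, b in video if t != b)
    repeats = sum(1 for i in range(1, len(video)) if video[i] == video[i - 1])
    print(name, video, f"mixed-field frames: {mixed}, whole-frame repeats: {repeats}")
```

Running this shows 2:3:2:3 producing two mixed-field frames (smooth motion, but more work to interleave), 2:3:3:2 producing one, and 2:2:2:4 producing none at all, just a repeated whole frame, which is cheap to build but visibly jerky.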
The default pulldown pattern that is applied when previewing to an NTSC monitor over FireWire is set in the Playback Control tab of the System Settings dialog. Final Cut Pro > System Settings…
The pulldown pattern can be changed in the RT menu in the Sequence window.
Figure 12: Pulldown pattern settings for previewing over Firewire
If you’re not seeing the Pulldown Pattern options in the RT menu, make sure View > External Video > All Frames is checked (Press Command+F12).
Premiere Pro Premiere Pro added capture support for NTSC 24p video in version 1.5. Prior to 1.5, you could create a timeline with a timebase of 23.976 and edit with 24p material created in another program such as DV Filmmaker, but you couldn’t capture and remove the pulldown from DVX100 or XL2 footage. Project Presets Project presets in Premiere Pro are simpler: capture and sequence settings live in one dialog. Unlike Final Cut, Premiere Pro does include a project preset for 24p NTSC footage that was shot with an anamorphic adapter. This saves you from having to customize a preset with the anamorphic flag set. While these presets are named Panasonic 24p, they work equally well with the Canon XL2. The reason Premiere Pro has only one family of presets for 24p NTSC footage is that it automatically determines whether the footage was shot in advanced or standard mode and removes pulldown, regardless of whether you wanted it removed or not. The downside to this is that you cannot use 24p standard footage in a 60i project.
This folder contains the presets for projects using 24p footage. They work equally well with DVX100 and XL2 footage. Note that Standard in the preset name does not refer to 24p standard mode, but to 4:3 standard aspect ratio.
Should you want to create your own variation on the 24p presets, click the Custom Settings tab, make your changes, and click Save Preset.
Figure 13: In Premiere Pro, it’s very simple to select a project preset for 24p footage shot in 4:3 or 16:9
Playing Back Video When Premiere Pro exports movie files, it uses the same pulldown as 24p advanced, 2:3:3:2. This has a little more temporal jitter than 2:3:2:3 but is less processor-intensive. There is another option, labelled ABBCD, which repeats a frame when previewing over FireWire.
Figure 14: In Premiere Pro, the DV Playback settings have options for displaying 24p on an NTSC monitor
Chapter 5: 24p Editorial and Postproduction | Converting Interlaced Video to 24p
Converting Interlaced Video to 24p There are several desktop options available if you need to convert 60i footage to resemble film. This may seem like an unlikely scenario if you regularly shoot with the DVX100 or XL2, but if you receive archival footage that is interlaced, or someone has given you great content shot with a 60i DV camera, you may need to convert this material to 23.976. You may also want the film look when you’re restricted to shooting and working in interlaced video for a particular project. I’m going to cover both the pipeline for converting 60i (NTSC) or 50i (PAL) video to 24p and the simpler process of giving interlaced footage a film look.
Figure 15: 24p and 60i to 24p workflows compared. The flowchart traces two paths through the shoot, capture, edit, post, and master stages:
• Full video-to-film-look workflow: Shoot 60i, 24p advanced, or 24p standard, and capture at 29.97 fps. For 24p advanced, remove the 2:3:3:2 pulldown; for 24p standard, remove the 2:3 pulldown (Final Cut users need to remove the 2:3 standard pulldown in an application such as Cinema Tools or After Effects). For 60i, edit at 29.97 fps, then retime to 23.976 fps progressive and deartifact. Color correct and add effects for the “film look,” then edit at 23.976 fps; chroma keying and compositing of CG elements occur here. Master by converting to a 1080p master, encoding for progressive scan DVD at 23.976 fps, re-introducing 3:2 pulldown for NTSC broadcast, or preparing for a film-out per the lab’s instructions.
• Partial film-look workflow: Shoot 60i or 24p standard, capture at 29.97 fps, edit at 29.97 fps, color correct and add effects for the “film look,” and leave as-is for NTSC broadcast.
The pipeline for converting material from 60i to 24p usually involves the following steps: 1. Retiming to 24p. This process transforms 29.97 fps interlaced footage to 23.976 fps progressive footage. Normal deinterlacing algorithms make up for the missing fields by duplicating them, which usually results in either a fuzzy image or an image with jagged diagonal edges. By contrast, film-look software employs a combination of motion estimation and pattern matching to retime the footage to 23.976 fps progressive. 2. Deartifacting. This process intelligently smooths the two chroma channels, Cb and Cr, because these channels carry one quarter the information that the luminance channel has. In most cases, this is done by using the luminance channel as a guide for smoothing the edges in the other two channels.
3. Applying a look. This is the process of applying a creative look that simulates film processing effects. When video has been shot correctly and has a lot of latitude, it can be manipulated and given a look. Look at recent films such as The Matrix, Amelie, or Three Kings and notice the colorization and contrast in the image. Rendering looks can take a lot of time. For this reason, I strongly suggest experimenting with still frames from each shot when developing the looks for a project. When complete, create presets and apply them to the edit once the picture is locked. The path you take will vary depending on what material you have and what needs to be done to it. Below are common workflows that use one, two, or all stages of the pipeline:
• Converting interlaced to 24p and de-artifacting: In this case, your footage is interlaced PAL or NTSC video and needs to be retimed, de-interlaced, and deartifacted. This improves chroma keying, is a valuable step before compressing video for streaming or DVD, and preps the footage before sending it to a lab for upconverting to HD or a film-out. This workflow is covered in this chapter using Film Effects in Final Cut Pro and in the following chapter using Magic Bullet Suite in After Effects.
• Applying a film look to interlaced video: In this workflow you keep the frame rate at 60i or 50i and just alter the contrast range and image characteristics for a more film-like appearance. This is as straightforward as applying any filter or effect to a clip and adjusting the settings to your taste. This workflow is covered in this chapter using Magic Bullet Editors or Nattress Film Effects in Final Cut Pro and Magic Bullet Editors in Premiere Pro.
Two additional workflows that are of interest to the filmmaker already shooting in 24p are:
• De-artifacting 24p video: This assumes you have shot 24p advanced and perhaps want to clean up the edges for a chroma-key shot. Deartifacted video almost always looks a little better, and you may want the enhancement for a progressive scan DVD, a film-out, or web video. Since the footage was shot 24p, you don’t need to retime the footage or make it progressive. This workflow is covered in this chapter using Nattress’s G Nicer in Apple Final Cut Pro. If you are using Final Cut Pro and After Effects or Premiere Pro and After Effects, these workflows are discussed in the next chapter.
• De-artifacting 24p video and applying a film look: This workflow takes the previous workflow one step further by color correcting the footage to have a film-like contrast ratio or to emulate a film developing process such as bleach bypass. This is covered in this chapter using Nattress Film Effects. If you are using Final Cut Pro and After Effects or Premiere Pro and After Effects, these workflows are discussed in the next chapter.
Converting HDV Footage You can also convert interlaced NTSC (60i) or PAL (50i) HDV to 23.976 frames per second progressive video. New professional HDV models from Sony and Canon allow shooting in PAL.
If you have such a camera, shooting HDV PAL offers you a timebase (50 interlaced fields is perceptually 25 frames per second) that is closer to 24p. As a result, PAL’s motion characteristics are more similar to 24p, and conversion produces better results.
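The arithmetic behind these timebases can be checked with Python’s `fractions` module. This is a quick sketch; the constants are simply the standard NTSC and PAL rates:

```python
from fractions import Fraction

# Exact NTSC rates are 1000/1001 "pull-down" of the nominal values;
# the decimal names 29.97 and 23.976 are rounded shorthand.
NTSC_VIDEO = Fraction(30000, 1001)   # "29.97" interlaced frame rate
NTSC_24P   = Fraction(24000, 1001)   # "23.976" progressive rate
PAL_VIDEO  = Fraction(25)            # PAL (50i) frame rate
FILM       = Fraction(24)

print(round(float(NTSC_VIDEO), 5))   # 29.97003
print(round(float(NTSC_24P), 5))     # 23.97602

# 60i -> 24p: the ratio is exactly 5/4, so one video frame in five
# is redundant once pulldown is removed.
print(NTSC_VIDEO / NTSC_24P)         # 5/4

# PAL -> 24p needs only a 25:24 conform (about a 4% retime).
print(PAL_VIDEO / FILM)              # 25/24
```

The exact 5/4 relationship is why NTSC pulldown removal is a clean frame-dropping operation, while PAL conversion is a small speed change rather than a cadence.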
Figure 16: The stages in making 60i look like film. 1. For each second, 60 interlaced fields are converted to 24 progressive frames. 2. De-interlace and retime: once de-interlaced, there is an apparent sharpness due to low chroma sampling and DV compression. 3. De-artifact: de-artifacting removes the sharpness along edges and makes the image look smoother. 4. Apply look: finally the footage is graded, or given a unique color treatment that suits the story’s mood and the vision of the director and cinematographer.
Magic Bullet Editors Magic Bullet began as an After Effects-only suite of plug-ins for retiming, deartifacting, and applying looks and film-like effects. Originally developed at the Orphanage, a leading visual effects company, Magic Bullet is now sold, distributed, and supported by Red Giant Software. In this section, I cover Magic Bullet Editors for Final Cut and Premiere Pro. The Editors version has the Look Suite for applying creative color adjustments, and it has Misfire, a collection of effects that emulate the many ways film can be damaged for an aesthetic effect. Unlike Magic Bullet Suite, Magic Bullet Editors does not include the retiming, deinterlacing, and deartifacting technology. Following this section, I’ll cover exporting an edit from Final Cut Pro or Premiere Pro into After Effects, where you will see the full Magic Bullet Suite in action. You can download a demo version of Magic Bullet Suite and Editors from Red Giant Software’s web site. Go to www.redgiantsoftware.com/demos.html. What really makes both the Suite and the Editors well worth the price are the stock looks (called favorites in Final Cut Pro and presets in Premiere Pro) that are included. These have been created by professional colorists who use Magic Bullet daily for television commercials and feature film work. Besides the manual, they are the best way to learn how to use the settings in Look Suite, and they offer a great starting point for developing your own looks. The favorites are essentially copies of the Look Suite effect with predefined settings. As with any plug-in that ships with favorites, I suggest experimenting with several favorites and tinkering with the settings to learn how Look Suite works. From there you should be able to create your own look favorites.
Figure 17: With properly exposed footage, you have many creative options with Look Suite and Misfire. The sample frames show the original footage alongside frames treated with: LS Bleach Bypass; LS Dream Look; LS Mexicali; LS Diffusion Max; LS Basic Cool Max; LS Neo; LS Berlin; Misfire Grain with LS Curahee, Misfire Basic Scratches, and Misfire Dust; LS No. 85 with Misfire Dust, Misfire Basic Scratches, Misfire Heavy Scratches, and Misfire Weave; and LS Bistro with Misfire Funk, Misfire Vignette, and Misfire Basic Scratches.
The Mac OS installer for Magic Bullet Editors does not install the Looks favorites into Final Cut Pro. To install the favorites for Look Suite, do the following: 1. In Final Cut Pro, open the project file that came with Magic Bullet Editors named “Look Suite Favorites.” It is in the same folder as the installer. 2. Select all the bins in this project and choose Edit>Copy. 3. Click on the Effects tab, Control-click on the Favorites bin, and choose Edit>Paste. This will add
all the favorites to Final Cut, and you will be able to access them all from any project. To keep these separate from other favorites, add an MB Favorites bin and place all the Look Suite bins into this master bin. Understanding the Filter Settings If you plan to create your own looks or want the ability to tailor the preset looks a little more, I highly encourage you to become familiar with the plug-in’s four main categories: Subject, Lens Filters, Camera and Post. Also note that Magic Bullet Editors applies the settings to the clip in this order (from top to bottom). Subject Settings Subject has settings for making basic image adjustments before creating the film look. It’s similar to color-balancing a photo in Photoshop before altering it any further. FINAL CUT PRO
PREMIERE PRO
Do Subject applies all of the settings in this category to the image when checked.
Pre Saturation controls the amount of saturation before the image is affected by the other settings in the effect. With well-exposed footage it is customary to decrease the saturation slightly to better prepare it for the settings that follow.
Pre Gamma alters the image’s exposure. Both the Pre Gamma and Post Gamma sliders work in percentages and can cancel one another out.
Pre Contrast alters the image’s contrast. Ideally, your video has an even distribution from light to dark. If the video you’re working with has too much contrast, use a negative value and add it back after adjusting the other settings. Like the Pre Gamma setting, this setting and the Post Contrast setting are set in percentages and can balance each other out.
Figure 18: Magic Bullet Editors’ Subject settings for Final Cut Pro and Premiere Pro
Lens Settings The Lens category has controls for emulating the visual effects of applying a diffusion or gradient filter to a lens. Diffusion filters give an image a soft, dissipated look by scattering the incoming light. Black diffusion is ideal for smoothing out skin blemishes and wrinkles or reducing the effect of digital compression artifacts. A gradient filter (also called a graduated filter) ramps smoothly from the top, where it contains the most colorization, to the bottom, where there is none. Gradient filters are primarily used to control the exposure of very bright skies that would otherwise blow out; this particular form of gradient filter is called a graduated Neutral Density (ND) filter. Unfortunately, nothing you do digitally in post can restore the information lost to overexposure. For this reason, you should use a Neutral Density filter when shooting outdoors in bright sunlight.
There are also many gradient filters for creatively manipulating the sky’s appearance. They can make a sky more blue or fake the appearance of a warm late-afternoon sun. These are best avoided when shooting your video because their effect cannot be removed in post, and you can often create the same look with far greater control in post. FINAL CUT PRO
PREMIERE PRO
Do Lens applies all of the settings in this category to the image when checked. Grade measures the strength of the diffusion filter on a scale of 1 to 5, where 5 is stronger. As filters can be graded less than 1 (1/8, for example), the range is from 0 to 6. A setting of 0 turns the sub-category off entirely. In reality, gradient filters affect dark areas more than light areas, and the Gradient settings behave this way. Size sets the size of the diffusion. For black diffusion, a low setting creates blooms around
highlights and a high setting reduces overall image contrast. For white diffusion... For the gradient category, a setting of 100% distributes the gradient from top to bottom. Increased settings extend the gradient’s end point beyond the bottom, which colorizes more of the image. Highlight Bias controls how the diffusion affects highlights. For black diffusion, a negative setting blooms highlights and a positive setting blooms dark areas. For white diffusion, a low setting blooms the entire image and a positive setting blooms only the highlights.
Color sets the base color for the gradient. Highlight Squelch sets the amount the gradient colorizes the highlights. A low setting tints the highlights slightly (and looks more natural); a high setting tints the highlights more but can look fake if set too high. Fade controls the distribution of the gradient. A setting of 50% produces a linear gradient. Below 50% pushes the gradient’s midpoint towards the bottom, while above 50% pushes the midpoint towards the top.
Camera Settings The Camera category emulates effects created by how different film stocks react to light. The Three-Strip process is of particular interest to those who want to emulate the highly saturated and deep blacks of early color films such as The Wizard of Oz and early animated Disney films.
FINAL CUT PRO
PREMIERE PRO
Do Camera applies all of the settings in this category to the image when checked. 3-Strip Process emulates the three-strip dye transfer color process which was used for early color films.
Tint controls the amount of tint applied to the image. At 100%, it completely colorizes the image by using the tint color and luminance information. Tint Color/Tint Black Color are the colors used for the tint and the black tint.
Tint Black sets the amount of tint in dark areas. Tint Black Threshold sets what luminance values are affected. A high setting affects more of the image, but note that highlights are not affected even with a high setting.
Figure 20: Magic Bullet Editors’ Camera settings for Final Cut Pro and Premiere Pro
Post Settings The Post category applies final touches to the image. It has all the controls that the Subject category has. These settings, as mentioned earlier, can effectively cancel out the equivalent settings in the Subject category. In most cases you will use the Post Gamma, Contrast, and Saturation controls not to undo what was done in the Subject category, but to exaggerate or fine-tune the settings you’ve made in the Lens and Camera categories. Post also has two additional controls for fine-tuning the image’s temperature or hue. The hue controls are meant for making subtle changes when you or someone else wants the final image to be slightly cooler or warmer. Rather than redoing the entire effect, these two sliders make such changes much easier to accomplish. FINAL CUT PRO
PREMIERE PRO
Do Post applies all of the settings in this category to the image when checked. Warm/Cool Less than 0% makes the image warmer (orange) and greater than 0% makes it cooler (blue).
Warm/Cool Hue alters the effect’s definition of warm and cool. Less than 0% shifts the color balance towards green and above it shifts it towards magenta.
Post Gamma alters the image’s exposure. Post Contrast alters the image’s contrast.
Post Saturation alters the image’s saturation.
Figure 21: Magic Bullet Editors’ Post settings for Final Cut Pro and Premiere Pro
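The cancellation between the Pre and Post gamma settings can be illustrated with a toy model. The percentage-driven gamma curve below is a generic sketch for intuition, not Red Giant’s actual math:

```python
# Toy model of the Subject -> Post ordering: a gamma push applied in
# the Subject stage can be balanced out by an opposite Post setting.
# Values are normalized 0..1 luminance.

def gamma(value, percent):
    """Positive percent brightens midtones, negative darkens them."""
    exponent = 1.0 / (1.0 + percent / 100.0)
    return value ** exponent

pixel = 0.5
brightened = gamma(pixel, 25)       # like Pre Gamma +25% in Subject
restored = gamma(brightened, -20)   # like Post Gamma -20% in Post
print(round(brightened, 4))         # 0.5743
print(round(restored, 4))           # 0.5 -- back to the original value
```

In this model +25% followed by -20% cancels exactly because the exponents 1/1.25 and 1/0.8 multiply back to 1, which mirrors the book’s point that the percentage sliders can balance each other out.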
Misfire Misfire, its derivative plug-ins, and a few complementary plug-ins all emulate the appearance of damaged film. While the Look Suite makes an image look like you took your time working with a colorist at a film lab, Misfire and its related plug-ins make the image look like the lab technicians and film projectionists abused it. While applying Misfire to any clip certainly makes it look damaged, there are many ways a projected film can look less than perfect: dust, scratches, fading or excessive contrast due to age and poor storage conditions, warping, gate weave in the projector, splotches caused by mold, vignetting, and grain caused by bad exposure. The primary Misfire plug-in has settings for Fading, Funk, Splotches, Dust, Flicker, Vignette, Displacement, Grain, and three different types of scratches: Microscratches, Basic Scratches, and Deep Scratches. Like the categories in Look Suite, each Misfire category can be individually turned off, and each category has also been made available as a separate plug-in. I find applying the damage effects separately allows me to focus on one effect at a time as I build a look. Also, since categories cannot be collapsed in Final Cut’s Filters tab, using the separate plug-ins is a more efficient use of screen real estate when working with only a laptop. Film damage effects should be avoided if you are planning a film-out since they affect resolution, and you need all the resolution you can get.
De-artifacting 24p and Applying a Look So you shot in 24p, have Magic Bullet Editors, but want to de-artifact footage. Since Editors does not de-artifact, you can apply either of the following chroma smoothing effects before applying Editors:
• Apple’s Color Smoothing - 4:1:1 effect.
• Nattress Film Effects’s G Nicer 4:1:1.
Applying a Film-Look to Interlaced Video This workflow assumes you have interlaced footage, but simply want to alter the color balance and contrast of the video so it looks more like film. Since it’s nearly impossible to make poorly shot video look like film, shoot your video according to the guidelines listed in Chapter 4. To apply the standard effect in Final Cut Pro, choose Effects > Video Filter > Magic Bullet > Look Suite. To apply a preset, select Effects > Favorites (under Video Filters not Video Transitions). You can also drag and drop a favorite onto a clip by doing the following: 1. Open the Effects tab, open the Favorites folder and select the desired favorite. 2. Drag and drop the favorite onto a clip in the Sequence, Source or Program windows.
To apply the standard effect in Premiere Pro, choose Window > Effects, open the Video Effects > Magic Bullet effects folder, and drag the Look Suite effect on to a clip in the Timeline window. To apply a preset open the Look Suite Presets effects folder and drag any of the presets onto a clip in the Timeline window. 186
Magic Bullet Suite for Adobe After Effects Magic Bullet Suite turns After Effects into a finishing station. The Suite has the Look Suite and Misfire filters as well as Magic Bullet, Opticals, LetterBoxer, and Broadcast Spec. In your NLE, you edit using a very simple timeline, export reference movies, and apply the 24p conversion, looks, transitions, and titles in After Effects. IN FINAL CUT PRO, PREMIERE PRO, OR OTHER NLE Perform a cuts-only edit. The majority of footage should be on the A roll (or track). Gaps in the A roll reveal the B roll. The B roll contains only the clips that are used in transitions. If there are any overlaying titles, turn these off.
A B Leave enough overlap between the A and B rolls so fades and dissolves can occur between the two rolls.
Export each roll separately as a QuickTime reference movie, and import these into After Effects. Export another low resolution movie with both rolls and any titles. Use this movie to align the rolls and recreate the titles after applying looks and transitions.
IN AFTER EFFECTS Apply the Bullet (Retime, Deinterlace, and Deartifact) Create a composition for each roll and apply Magic Bullet to each. If your footage is already 24p, you obviously won’t need to retime or deinterlace it, but you will most likely want to deartifact it since it does a good job of restoring color information.
Apply Look Suite Make new compositions with the bulleted rolls. Create and align adjustment layers for each shot you wish to alter. Apply Look Suite to the adjustment layer, modify the settings to your taste or use a preset look.
Apply Opticals Create a composition and add both rolls to it. Make sure they are in alignment, and then add a solid layer and apply the Opticals effect to it. Select the rolls for the source layers, apply keyframes and then select the desired optical effect.
Master Print
Broadcast or DVD MPEG2 Composition
Film Out Composition
Streaming Media Composition
Figure 22: An overview of the Magic Bullet Suite finishing process
While this workflow is somewhat rigid, it’s become quite common as Red Giant Software and The Orphanage have evangelized it. Here are the steps in more detail: 1. Edit using A-B rolls, and perform a cuts-only edit in your NLE. This method is modelled after the way optical houses would print two separate assemblies of film into a master print. The A roll contains the bulk of the edited film, while the B roll contains only the shots that the A roll occasionally transitions to. 2. Export each roll as a separate QuickTime reference movie. A QuickTime reference movie is not a digital movie file, but a file with links to timecodes in one or more original QuickTime movie files. After Effects treats the reference file as if it were one self-contained QuickTime movie file. This saves you rendering time and disk space, and you know you are working with footage that has not been recompressed.
Disable video and sound for the A roll Click on the video and audio buttons for the video track and both audio tracks belonging to the A Roll.
Do the same for the B roll Click on the video and audio buttons for the video track and both audio tracks belonging to the B Roll.
Choose File>Export>QuickTime Movie and uncheck Make Movie Self-Contained to generate a reference file.
Figure 23: Exporting the A and B rolls as QuickTime reference files
It is crucial to lock picture before importing the reference movies into After Effects. Changing the edit will likely mean redoing all of the same finishing steps again. Ouch! 3. Import the reference movies into a 16-bit After Effects project file. The filters in Magic Bullet
Suite are all capable of using After Effects Professional's high-quality 16-bits-per-channel mode. These effects perform very complex color and luminance operations on footage and will take advantage of the additional headroom that 16 bits per channel offers. Working in 16-bit is helpful when you are working with SD and HD uncompressed formats or are planning a film out. After Effects Standard does not support 16-bit-per-channel effects, but Magic Bullet Suite takes this into account by internally computing color adjustments at a high color resolution and then rendering them into the Standard version's 8-bit-per-channel color space without any
banding. Banding might occur when using the gradient or diffusion options in Look Suite.
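The banding problem is easy to demonstrate with a little arithmetic. A subtle gradient that spans only a tenth of the full brightness range must repeat each 8-bit code value across many pixels, producing visible bands; at 16 bits per channel every pixel in the same gradient can get its own level. This is a plain-Python illustration, not part of Magic Bullet:

```python
# Quantize a subtle gradient (10% of full range across a 720-pixel row)
# at 8 and 16 bits per channel, then count the distinct levels produced.
WIDTH = 720
gradient = [0.45 + 0.10 * x / (WIDTH - 1) for x in range(WIDTH)]  # 0.45..0.55

levels_8 = {round(v * 255) for v in gradient}     # 8 bits per channel
levels_16 = {round(v * 65535) for v in gradient}  # 16 bits per channel

print(len(levels_8))   # about 26 distinct values -> visible bands
print(len(levels_16))  # 720 distinct values -> every pixel gets its own step
```

With only ~26 steps available, each 8-bit level must stretch across roughly 28 adjacent pixels, which is exactly the banding the gradient and diffusion options can expose.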
Alt/Opt+click the 8 bpc button to convert the project to 16-bits-per-channel rendering.
Figure 24: Setting a project to use 16 bits per channel for rendering 4. Create bulleted compositions (also known as comps) that retime, deinterlace, and deartifact each
reference movie using Magic Bullet. » Before adding a reference file to a comp, make sure its pixel aspect ratio is correct. Select each reference movie and choose File>Interpret Footage>Main to set this. For footage with a standard 4:3 aspect ratio, select D1/DV NTSC (0.9) for the pixel aspect ratio. For footage with a widescreen aspect ratio of 16:9, select D1/DV Widescreen NTSC (1.2). » Add each reference movie to a separate composition by click-dragging it to the New Composition button at the bottom of the Project window. This automatically sets the duration of the composition to match the length of the reference movie. » Choose Composition>Composition Settings and set the frame rate to 23.976. Finally, apply Magic Bullet to the reference movie layer. Auto Setup analyzes the video layer and comp frame rate and performs the correct interlaced-to-progressive conversion if necessary. Footage that is already progressive is not processed. Deinterlace takes the interlaced fields from sequential frames and converts them into one progressive frame. Additional parameters select the source material's video standard and field order, and fine-tune motion interpolation. Deartifacting reconstructs chroma information lost to reduced color sampling. There are options for NTSC DV (4:1:1), HDV and PAL DV (4:2:0), Sony HD (3:1:1), and other variants of SD and HD (4:2:2). Consult the color sampling information for the codec you're using. Even if footage is already progressive, Magic Bullet is still worth applying, because the deartifacting can improve edge detail. Figure 25: Applying Magic Bullet to footage 5. Create look comps that reference the bulleted comps using adjustment layers and Look Suite.
» Create new comps from each of the bulleted comps. » If you are fine with applying a single look, you could simply apply Look Suite to the layer containing the bulleted comp, but that would greatly limit your options. To get complete control over the look for each shot, add adjustment layers that are above the bulleted comp layer but trim them so they align with each shot. Effects applied to the adjustment layer affect any layer below it and in this case, the bulleted comp. » Apply Look Suite to each adjustment layer and select a Preset or enter custom settings.
Select Load from the Presets drop down to show the Preset Browser.
Name each adjustment layer according to the shot or clip it is modifying.
Add adjustment layers above the bulleted comp layer and trim each layer so that it aligns with the shots you want to treat. Color coding each layer helps identify which layers use the same look.
Figure 26: Applying Look Suite to adjustment layers that are aligned to each clip 6. Create an opticals comp that uses both look comps and a solid layer for transitions.
» Add both look comps to the opticals comp and create a solid layer by choosing Layer>New>Solid. Don't worry about the color, but do click Make Comp Size to ensure it is the right size. Name it opticals and click OK. » Apply the Opticals effect to the layer and set the A layer to the A-roll look comp and the B layer to the B-roll look comp. » Add keyframes for the A-B Dissolve setting at each of the overlap points between the A and B layers. A 0% setting shows A and a 100% setting shows B. Any value in between renders an optical dissolve between the two layers that mimics a dissolve created by an optical film printer. The keyframe values for the A-B Dissolve should go from 0 to 100% when A dissolves into B and from 100 to 0% when B dissolves into A.
Once you set the A Layer and the B Layer, you can simply animate the Dissolve A-B parameter. The other settings fine-tune the transition so it performs more like a fade or a burn.
Place keyframes where A and B overlap. Consult the reference cut or prior comps to locate these points in time more easily. Figure 27: Animating the A-B Dissolve where the two rolls overlap
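At its core, the dissolve the effect mimics is a weighted mix of the two rolls, driven by the keyframed percentage. A minimal sketch of what a 0-to-100% A-B Dissolve ramp does to each pixel (a plain linear mix for illustration; the actual Opticals plug-in adds film-printer-style fade and burn response on top of this):

```python
def ab_dissolve(a_pixel, b_pixel, dissolve_pct):
    """Linear A-B mix: 0% shows roll A, 100% shows roll B."""
    t = dissolve_pct / 100.0
    return round((1.0 - t) * a_pixel + t * b_pixel)

# A short dissolve ramps the keyframed value from 0 to 100%.
a, b = 200, 40  # sample luma values from the A and B rolls
for pct in (0, 25, 50, 75, 100):
    print(pct, ab_dissolve(a, b, pct))  # 200, 160, 120, 80, 40
```

The symmetry of the mix is why the keyframes simply run 0 to 100% for an A-to-B dissolve and 100 to 0% for the reverse.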
Titles could be added to this comp or to another comp that references the Opticals comp. If you are doing a film out or are mastering to HD, you will want to create titles at the resolution you are targeting. This can be done after the final output from the workflow has been upconverted. 7. Create output comps that reference the opticals comp and are configured for SD, DVD, web, HD,
or a film out. The output settings for each of these will depend upon your capture card, streaming media codec, or instructions from the facility producing your film out. Refer to the documentation for your capture card or to the facility's instructions. Automatic Duck http://www.automaticduck.com Automatic Duck Pro Import AE converts a Final Cut timeline exported as XML into an After Effects comp while preserving links to the source material and timing information. This allows you to make simple trims and slip edits without having to return to Final Cut. It can also help in aligning adjustment layers and is useful when
you have to apply additional effects or chromakeying before running the suite of Magic Bullet plug-ins. To integrate this into your workflow, output XML for each of the A and B rolls (use the free XML export plug-in on Automatic Duck's website) and import the XML files. Add each to a comp for bulleting, which is added to a comp for creating a look, and so on.
Nattress Effects for Final Cut Pro Graeme Nattress’s Film Effects are Final Cut Pro filters for making NTSC or PAL video resemble film. With this filter set you can perform each step in the film-look pipeline entirely within Final Cut Pro: retiming and de-interlacing, de-artifacting, and applying a look. You can download a demo version of Film Effects from Graeme Nattress’s web site. Go to www.nattress.com/filmEffects.htm.
G Film - Basic Diffusion V2.5
G Film - Day for Night V2.5
G Film - Sepia V2.5
G Film - Basic Bleach V2.5
Original Footage
G Film - Old Projector V2.5
G Film - Cold Diffusion V2.5
G Film - Green V2.5
G Film - Warm Diffusion V2.5
Figure 28: Nattress Film Effects ships with many preset looks
G Film G Film is the primary plug-in in Film Effects. G Film converts NTSC or PAL interlaced video to 23.976 fps progressive video. If you only need to convert interlaced footage to 24p and do not wish to apply a film look, use this filter.
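Weave and blend, the two basic operations behind any deinterlacer, can be sketched in a few lines. This illustrates the concepts only; it is not Nattress's Smart algorithm, which additionally detects motion per area and chooses between the two:

```python
# Each interlaced field holds alternate scan lines of the frame,
# captured 1/59.94 s apart in NTSC video.
even_field = [10, 30, 50]  # lines 0, 2, 4
odd_field = [20, 40, 60]   # lines 1, 3, 5

def weave(even, odd):
    """Interleave the two fields. Perfect for static areas, but
    motion between the fields produces comb artifacts."""
    frame = []
    for e, o in zip(even, odd):
        frame += [e, o]
    return frame

def blend_line(above, below):
    """Interpolate a missing line from its neighbors in one field.
    This removes combing in moving areas at the cost of softness."""
    return (above + below) // 2

print(weave(even_field, odd_field))  # [10, 20, 30, 40, 50, 60]
print(blend_line(10, 30))            # 20
```

A "smart" deinterlacer applies weave where nothing moved and interpolation where motion was detected, which is why it looks best but takes the longest.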
G FILM Film Frames Per Second sets the resulting frame rate. PAL versions are listed in parentheses.
Anti-Alias Amount softens sharp vertical aliasing that may occur.
Media Field Order tells the plug-in which field dominance to use when de-interlacing the clip.
De-Interlace Options sets which de-interlacing method is used. Smart is best but takes longer. Smart Mask is used in conjunction with the Tolerance setting to help the plug-in recognize motion.
Output Field Order sets the resulting field dominance to match the timeline. Motion Blur is the amount of motion blur to apply when retiming the footage to 24p. Blending Type sets the blending type used to de-interlace the clip. Pulldown Pattern selects the pulldown pattern used to retime the clip.
Tolerance sets which areas of the image are treated as moving. Output Cropping offers presets for applying a widescreen matte such as 16x9 (HD) or 2.35:1 (Academy). A user-definable matte is also available by selecting User Aspect and adjusting the User Aspect slider below.
Figure 29: G Film filter settings
G Film Plus and G Film RT G Film Plus has the 24p conversion settings as well as settings for creating a film-like image. G Film RT lacks the conversion settings. RT is best suited for those who want to add a look to interlaced footage or who don't need the conversion options because they own a camera that shoots 24p. Since both have the film-look settings, I'll cover those by showing the RT interface. The Importance of Curves According to Graeme Nattress, the creator of Film Effects, the frame rate processing and curve controls are the most crucial in achieving a film-like image from interlaced footage. Curves are a common way to represent the distribution of brightness values in an image. Film tends to have an S-shaped curve, which equates to strong blacks and extended highlights. Video, by contrast, has a fairly linear response (curvy it isn't!), so the Curves settings help take well-lit, properly exposed video and make it more film-like by pushing the Black Curve Master setting into negative values and the White Curve Master setting into positive values. VIDEO RESPONSE CURVE
FILM RESPONSE CURVE These are simplified curves for video and film. The video curve is fairly linear and quickly clips highlights and crushes darks. The film curve has an S shape, which produces more contrast while rolling off gently, preserving more detail in highlight and shadow areas.
Figure 30: Comparing response curves for video and film
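The S curve can be expressed directly as a tone-mapping function. A common illustrative choice is the smoothstep polynomial, which deepens shadows and rolls off highlights the way film response does (this is a generic S curve for demonstration, not the exact curve Film Effects computes):

```python
def s_curve(v):
    """Map a 0-1 video level through a film-like S curve
    (smoothstep: 3v^2 - 2v^3). Midtones gain contrast while
    the toe and shoulder roll off gently."""
    return v * v * (3.0 - 2.0 * v)

for v in (0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0):
    print(round(v, 2), round(s_curve(v), 3))
```

Note that values below the midpoint are pushed down and values above it are pushed up, which is the same shape you build by taking the Black Curve Master negative and the White Curve Master positive.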
G FILM RT (1 OF 2)
MASTER CONTROL Master Effect Amount sets the filter's strength (0–200%). Master Brightness sets image brightness, but should be avoided if you need fine control; use the Curves category instead.
CHROMA BLUR Amount sets the effect's strength (0–100%). Blur Radius sets the blur radius (0–100%). Post Sharpen sets the amount of sharpening to apply after the blur (0–100%). Black and White Diffusion Order specifies the order in which black and white diffusion are applied.
BLACK DIFFUSION Amount sets the strength of the black diffusion effect. Black Limit limits what range of darks receive diffusion (0–100%). Blur Radius sets the blur radius (0–100%). Mode sets the blending mode for the diffusion.
WHITE DIFFUSION Amount sets the strength of the white diffusion effect. White Limit limits what range of highlights receive diffusion (0–100%). Blur Radius sets the blur radius (0–100%). Mode sets the blending (or transfer) mode for the diffusion; Multiply is the default.
TEMPERATURE Cold/Warm Hue adjusts the hue; negative is cooler and positive is warmer. Cold/Warm Light adjusts the hue for highlights and works like Cold/Warm Hue. Light Light Pollution leaks the warm/cool effect applied to the highlights into adjacent areas. Cold/Warm Dark adjusts the hue for shadows and works like Cold/Warm Hue. Dark Light Pollution leaks the warm/cool effect applied to the darks into adjacent areas.
SATURATION Saturation Amount adjusts saturation; -1 is completely desaturated, 0 is normal, and 4 is very saturated. Also Desaturate? restricts the effect to light and dark areas. De-Saturation Amount: 1 is completely desaturated, 0 is normal, and -4 is very saturated. Invert Saturation exchanges the settings between the saturation and de-saturation amounts.
TINT Tint Color A sets the first color for the tint. Tint Color B sets the second color for the tint. Tint Amount sets the effect's strength (0–100%). Mode sets the blending mode for the tint. Tint Blending uses the brightness values in the image to blend the tint. Tint Blending Invert blends on light (on) or dark (off) values in the image. Tint Gradient enables a gradient between the A and B colors. Tint Gradient Direction sets the angle. Tint Gradient Amount sets the gradient strength. Tint Gamma adjusts the gradient's midpoint.
BLEACH BYPASS Use Unprocessed instructs the filter to use the original image information and not the image resulting from the preceding effects. Bleach Effect Amount sets the effect's strength (0–100%); too much and the image becomes too dark. Over Exposure sets the amount of overexposure; use it in conjunction with the previous control. Bleach Sharpness sets the amount of gritty sharpness the effect creates. Limit Light reduces the effect upon highlights as the setting is increased (0–254). Limit Dark reduces the effect upon dark areas as the setting is increased (0–254).
Figure 31: G Film Plus filter settings (the remainder is on the following page)
G FILM RT (2 OF 2)
CURVES Apply Curves First applies the Curves settings before any other effect in G Film. Show Curves displays an overlay on the clip showing the curve settings. Dither? applies subtle dithering to the image, which can reduce the appearance of banding. Strong Dither? applies heavy dithering to the image, which further reduces the appearance of banding. White Level adjusts the brightness of the image's bright areas (0–255); lower values increase overall image brightness. Black Level adjusts the brightness of the image's dark areas (0–255); higher values increase the brightness in the dark areas. Gamma adjusts the midtones (-1 to 1); 0 is no change, negative values darken the midtones, and positive values lighten them. White Output adjusts the bright areas (0–255); lower values darken bright areas. Black Output adjusts the dark areas (0–255); higher values brighten dark areas. White Curve Master adjusts the top end of the curve, the bright areas (0–100). Black Curve Master adjusts the bottom end of the curve, the dark areas (0–100). White Curve R, G, B adjusts the top end of the curve for each discrete color channel (0–100). Black Curve R, G, B adjusts the bottom end of the curve for each discrete color channel (0–100).
IMAGE FLICKER Amount sets the flicker amount.
GRAIN CONTROLS Amount sets the strength of the effect (0–100%). Grain Scale sets the size of the particles (0.5–4). Colour? colors the grain when on. Mode sets the effect's blending mode.
SCRATCH SETTINGS Scratch Type sets black, white, or black-and-white scratches. Amount of Scratches sets the number of scratches. Length of Scratches sets scratch life span. Depth of Scratches sets scratch depth. Scratch Weave sets the amount of scratch movement across multiple frames. Scratch Thickness sets scratch thickness.
HAIR SETTINGS Amount of Hairs sets the amount of hair. Hair Movement sets the amount of jitter (0–100). Hair Length sets the length (0–100). Sticky Hairs sets the percentage of hair that doesn't jitter (0–100%). Hair Curliness sets hair curliness (0–10). Hair Thickness sets hair thickness (0–10). Hair Blend sets the opacity for the hairs (0–100%).
DIRT Dirt Blend sets the dirt opacity (0–100%). Amount of Dirt sets the amount of dirt (0–100). Size of Dirt sets the size of the dirt (0–50).
WEAVE Gate weave occurs when projected film moves slightly side to side due to a faulty projector. Weave Wavelength sets the number of frames the weave animation will take (2–100). Weave Amplitude sets the size of the weave (0–100%).
CROPPING Output Cropping offers presets for applying a widescreen matte such as 16x9 (HD) or 2.35:1 (Academy). A user-definable matte is also available by selecting User Aspect and adjusting the User Aspect slider below.
Figure 32: The remainder of settings for G Film Plus
G Nicer 4:1:1 G Nicer attempts to reconstruct chroma information lost to the 4:1:1 chroma sampling in the DV codec. Recall that 4:1:1 means that for every four pixels of luminance information, only one pixel of color information is kept. I'd recommend using this filter when you shoot chromakey (blue screen or green screen) shots with a DV camera, or when you want to slightly improve the quality of your footage before upconverting it to a format (SD or HD uncompressed, for example) that has better chroma sampling, such as 4:2:2 or 4:4:4. G Nicer compares the luminance channel with the color channels for each pixel in the image and from that reconstructs the lost color information. Since more information (such as edge detail) is retained in the luminance channel, G Nicer does a better job than simply blurring the chroma channels. G NICER 4:1:1 Diagnostics shows the smoothing results for the final image as well as before and after views of the discrete luminance and chroma channels.
Linear to Smooth sets the strength of the effect. Sharpen sets how much reconstructed information is added and where to add it: along edges, interior to edges, or everywhere. This is useful when the image would look better with detail added only along edges.
Figure 33: G Nicer 4:1:1 filter settings
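The sampling ratios translate directly into how much chroma resolution each codec keeps. A quick sketch of the chroma dimensions of a 720×480 frame under the schemes mentioned in this chapter (straightforward arithmetic, using the ratios as commonly defined):

```python
# Fraction of luma resolution kept per axis (horizontal, vertical)
# for common chroma subsampling schemes.
SCHEMES = {
    "4:4:4": (1.0, 1.0),   # full chroma
    "4:2:2": (0.5, 1.0),   # half horizontal chroma (SD/HD uncompressed)
    "4:1:1": (0.25, 1.0),  # quarter horizontal chroma (NTSC DV)
    "4:2:0": (0.5, 0.5),   # half chroma on both axes (PAL DV, HDV)
}

W, H = 720, 480
for name, (h, v) in SCHEMES.items():
    print(name, int(W * h), "x", int(H * v), "chroma samples")
```

Note that 4:1:1 keeps a 180×480 chroma grid while 4:2:0 keeps 360×240: the same total sample count, distributed differently, which is why each scheme needs its own de-artifacting option.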
To make best use of the improved color sampling G Nicer provides, apply it to clips in an uncompressed (4:2:2 or 4:4:4) timeline. If the footage remains in a DV timeline, G Nicer provides no benefit, because a DV timeline uses the DV codec, which discards the color information G Nicer recreates. Also note that it adds significantly to overall render time. Picking the Right Filter to Match Your Workflow Since Film Effects includes well over a dozen plug-ins, here's a list that matches workflows with plug-ins. • Converting Interlaced Video to 24p: Use G Film when you want to convert interlaced footage to progressive but don't want any additional processing. This works well for those who need to edit in 24p but don't have the time it takes to de-artifact footage, or who are relying on another process in post to de-artifact the footage later. • Converting Interlaced Video to 24p and De-artifacting: Use G Film and G Nicer 4:1:1 when you are working with interlaced chromakey footage. G Nicer will smooth the edges of your subject and facilitate pulling cleaner mattes with keying software. This works well if you are using the keyers that come with Final Cut Pro. Note that newer keyers, such as DV Matte Pro by DV Garage, use the luminance channel to pull a key, so while G Nicer will clean up the edges in the color channels, it's not as necessary for pulling a clean matte.
• Converting Interlaced Video to 24p, De-artifacting, and Applying a Look: Apply G Nicer and G Film Plus, in that order. G Nicer will de-artifact the footage, and the settings in G Film Plus will convert it to progressive and apply a film look. • De-artifacting 24p video: If your footage was shot in 24p and you want to improve the color channels for chromakeying or for bumping it up to a better format, apply G Nicer 4:1:1. • De-artifacting 24p video and applying a film look: Apply G Nicer 4:1:1 and G Film RT, in that order. Set G Nicer to de-artifact the video, then apply G Film RT or one of its presets to apply a film look. The Rest of Film Effects
Many of the additional plug-ins are presets or subsets of one of these three plug-ins. In addition to the derivative plug-ins, Film Effects includes several plug-ins for applying a film-like gamma curve to an image, applying film-like transitions, and sharpening:
• G Film Extra
• G S-Gamma, G Simple S-Gamma, G Simple S-Gamma Plus, G Gamma S-Gamma
• G Chroma Sharpen Interlaced and G Chroma Sharpen Progressive
• G Film Flash
• G Film Flash Transition
• G Film Dissolve
• G Vignette
• G RGB Color Mixer
Case Study: Anthony Lucero Anthony Lucero is a filmmaker and an effects editor in the Bay Area.
What is your background? I graduated from San Francisco State's Cinema Studies Program in 1995. While I wanted to direct my own films, that wasn't going to happen overnight, so I got into the industry as an editor. In school I had two internships as a production assistant: one on a feature film and another at a local television station.
Why did you pick post over production? With production, you are often weeks or months between jobs. Post, on the other hand, is steady work. I also found that I enjoyed cutting, so I had another internship at an advertising agency, and that led to a full-time position cutting commercials.
How did you get into effects editing? After a while, I felt constrained by the 30-second time limit. I wanted to work on longer-form material and features. I got a job as an editorial assistant at ILM that led to editing visual effects shots.
What are your personal films like? I've directed two short comedies. The first film, Who Are They?, is about a guy obsessed with trying to find out who "they" are. "They" being the people whose opinion is known, but who are nameless. For example, "They say it's a great restaurant."
The second film, I Want My Mocha, is about a botched attempt at holding up a coffee shop.
I've also edited short films in 24p for other filmmakers. The last short I did, One Weekend a Month, won an honorable mention at Sundance.
What formats are you shooting with? Who Are They? was shot with the Sony HDWF900 CineAlta. I Want My Mocha used the Sony FX-1 HDV camcorder. I've also edited films shot with the Panasonic Varicam.
What was your editorial workflow? After the footage was captured, it was downconverted from HD to DV resolution. I received the footage on a hard disk and edited on a 17" PowerBook, which I recommend for its great screen. When I finished the edit, we reconnected the project to the HD source.
Can you share any organizational tips? Sure. If there is footage that I know I'm not going to use, I add a "z" to the filename. This moves the clip to the bottom of the list and lets me focus on the other clips. Anything I want at the top of the list gets an "a" or "01." I create bins for my master sequences, for each scene, and for archived sequences. I also like to have bins for scratch music and sound effects as well as a bin for final sound files from a sound designer.
How do the archiving bins work? Let's say I make a small stylistic change that I may not want to keep. I copy the original sequence before making the changes and put the copy in this bin. Since sequences don't add much to the file size, I do this a lot.
Individual clips or long clips? I usually work with long clips and create subclips for each take. I name the subclips using the form scene-take, so 1-2 would be scene 1, take 2. I move the Description column in the Browser window next to the clip name column and use it for shot angles and other important notes.
24p Output Options Digital filmmaking offers the filmmaker many methods of distribution: DVD, broadcast, film, and Internet streaming. The filmmaker now has all the means necessary to prepare projects on the desktop for any of these media.
Chapter 6: 24p Output Options |
Compressing 24p for DVD Making a DVD from your progressive footage involves five steps: completing the edit, transcoding video, designing menus, authoring the DVD, and building the project (burning a disc or writing a DLT). In this chapter I will cover transcoding video, a vital part of this process. Some important subjects that I don't cover are menu design, DVD authoring, and building. For these subjects, I suggest reading the first book listed below and one of the other two, depending on your authoring application: • DVD Authoring and Production by Ralph LaBarge for a thorough introduction to DVD production and technology. • Designing Menus with Encore DVD by John Skidgel (me) for creating easy-to-use DVDs with Encore DVD. • Designing Menus with DVD Studio Pro by Alex Alexander and John Skidgel for creating easy-to-use DVDs with DVD Studio Pro. MPEG-2 encodes both interlaced (60i) and progressive (24p) video. Since commercial film production is based upon 24 fps, many studio DVDs are encoded at 24p. These look better than interlaced DVDs on monitors that support progressive display. A 24p DVD also has six fewer frames per second to compress, which allows a higher bit rate and quality. To support older televisions, nearly all DVD players apply 3:2 pulldown to 24p footage for displays that don't support progressive output. The MPEG-2 encoder you use must support encoding at 24p and insert the 3:2 pulldown flags the DVD player requires in order to display the footage on interlaced televisions.
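The 3:2 pulldown the player performs is a fixed cadence: every four 23.976 fps frames are spread across five 29.97 fps interlaced frames (ten fields) by emitting fields in a 2-3-2-3 pattern, since 23.976 × 5/4 = 29.97. A sketch of the mapping (standard cadence; the frame labels are illustrative):

```python
# 3:2 pulldown: four film frames A, B, C, D become ten video fields.
CADENCE = [2, 3, 2, 3]  # fields emitted per film frame

def pulldown(frames):
    fields = []
    for frame, count in zip(frames, CADENCE):
        # Emit `count` fields, each labeled by its source film frame.
        fields += [frame] * count
    return fields

print(pulldown(["A", "B", "C", "D"]))
# -> ['A','A','B','B','B','C','C','D','D','D']
# Paired into interlaced frames: AA, BB, BC, CD, DD
```

Two of the five resulting video frames mix fields from different film frames, which is exactly the judder and combing that flagging the stream for progressive playback avoids.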
HD DVDs While hugely popular, standard definition DVDs will eventually be phased out, much as records were by the compact disc. High definition DVDs will replace standard definition once the consortium of entertainment, hardware, and software companies can agree on a format. High definition DVDs will offer high definition in interlaced and progressive formats and three to four times the capacity of standard definition DVDs. If you're shooting standard definition video, there's no
need to worry about compatibility, as most if not all high definition DVD drives will be backward compatible. The next-generation formats also offer improved codecs (H.264 from Apple and the MPEG-4 licensing authority, and VC-9 from Microsoft) in addition to MPEG-2. Lastly, the newer formats promise greater interactivity, Internet connectivity, and the ability to write information to the disc. At the moment it's too early to tell, but by NAB 2006 a few HD DVD options should be available.
MPEG-2 Overview MPEG, pronounced "em peg," stands for Moving Picture Experts Group. MPEG is a collection of international standards for digitally compressing audio and visual information. The compression and decompression algorithm is often called a codec (compressor/decompressor). Compression is the method of shrinking the file size, and decompression is the method of reinterpreting the compressed file for playback.
The MPEG collection of standards includes MPEG-1, MPEG-2, and MPEG-4. DVDs use a strict form of MPEG-2. The strict rules define the frame size, frame rate, aspect ratio, GOP (group of pictures) length, and maximum bitrate.

In DVD authoring, bitrate is widely defined as the amount of data in megabits (Mbits) read from the disc per second. The maximum bitrate allowed for a DVD is 9.8 megabits per second. Bitrate is calculated from the video stream and all audio and subtitle streams. Video streams are the largest, followed by audio, and the smallest are subtitle streams. It is best to keep the bitrate under nine megabits per second so that there is a little headroom for the player. Also, even though there is no set minimum bitrate for video, two megabits per second or less produces poor video quality and should be avoided.

Like many digital video compression methods, MPEG-2 employs intraframe and interframe compression. Intraframe compression reduces the size of a single frame, whereas interframe compression looks at similarities across a range of frames in order to shrink file size. A GOP is the smallest range of frames in an MPEG-2 video stream and is composed of frames with more detail (I frames) and lesser detail (P and B frames; P frames contain more information than B frames). MPEG-2 video streams that are DVD-legal have GOPs that are fifteen frames long.

The three important things to remember about MPEG-2 are GOP placement, bitrates, and bit budgeting. Exceeding the bitrate can cause a DVD player to crash, and using high bitrates for all content on the disc can consume all of its allocated space, leaving no room for additional content.

GOP Placement Knowing where GOPs are placed in a video stream is important when you set chapter points. Since chapter points can only exist on the boundaries of a GOP, there are times when the exact frame you want to mark as a chapter is not available, because it is within fifteen frames of the nearest GOP or another chapter. A GOP in DVD-compliant MPEG-2 video starts at an I frame and is 15 frames long.
A DVD-compliant GOP begins with an I frame and runs fifteen frames, following the pattern I B B P B B P B B P B B P B B before the next I frame begins. Because chapter points can fall only on GOP boundaries, the closest any two chapter markers can be is 15 frames apart (for example, 00;00;07;00 and 00;00;07;15).
If you set chapter markers before transcoding video to MPEG-2, most encoders will attempt to place GOPs where chapters occur. This ensures the accuracy of your markers.
Constant and Variable Bitrates With the constant bitrate compression method, the data rate is held constant regardless of what is being compressed. Portions that do not require the full data rate waste space, and portions that require more than the full data rate suffer in quality. By contrast, the variable bitrate method analyzes content in multiple passes and varies the data rate based upon specified data rate targets. Portions that need detail are given the maximum amount of bandwidth, and less detailed sequences are given lower amounts.
Figure 2: Comparing constant and variable bitrates. The constant method holds one data rate throughout, while the variable method moves between minimum, average, and maximum data rates as the content demands.
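Whichever method you choose, the combined streams must stay under the 9.8 Mbps ceiling described earlier, ideally with headroom. A quick sanity check (the stream names and rates here are illustrative, not prescribed values):

```python
DVD_MAX_MBPS = 9.8   # hard DVD-Video multiplex limit
SAFE_MAX_MBPS = 9.0  # leave a little headroom for the player

# Example mux: peak video rate plus audio and subtitle streams.
streams_mbps = {
    "video (peak)": 8.0,
    "audio (Dolby Digital 5.1)": 0.448,
    "subtitles": 0.04,
}

total = sum(streams_mbps.values())
print(round(total, 3), "Mbps")
print("within safe budget:", total <= SAFE_MAX_MBPS)
```

With variable bitrate encoding it is the peak video rate, not the average, that must clear this check.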
Consider the following when prepping your video in your NLE: • Maintain the highest level of quality by outputting movies at high quality and transcoding in the DVD authoring program. This approach is useful when you have a lot of footage and wish to fit it all on a DVD. DVD Studio Pro and Encore DVD both have “best fit” encoding algorithms for getting the best bitrate while making maximum use of the disc. An alternative is to compress directly from the timeline. This approach is convenient when you are well within the final disc’s capacity, wish to avoid additional rendering time, and have limited hard drive space for an uncompressed render. • Add compression markers (Final Cut Pro only) to enforce GOPs at important frames in your edit. Compression markers instruct Compressor to force an I-frame at this time location and ensure that the frame will have the quality it needs. • Add chapter markers. Adding chapter markers before media is transcoded forces GOPs at the chapter location, additionally they make each marker a destination for DVD navigational commands, such as button links and scripts. Bit Budgeting For short video clips, for example, an introductory animation or a motion menu, use a high bitrate. You may do this since it doesn’t have additional audio streams and the short length ensures that a lot of disc space won’t be consumed. Also, as mentioned, video quality benefits from a higher bitrate. If you have one long video clip (an hour or more) with multiple audio streams or several video clips of ten minutes or more, use a smaller bitrate so that when audio and video bitrates are calculated together there is enough bandwidth available. Table 1: Resolution for NTSC and PAL television standards
Television Standard              FPS              Resolution  Pixel Aspect Ratio
NTSC DV/DVD Standard (4:3)       29.97 or 23.976  720 × 480   0.9 × 1.0
NTSC DV/DVD Widescreen (16:9)    29.97 or 23.976  720 × 480   1.2 × 1.0
Chapter 6: 24p Output Options | Compressing 24p for DVD
Choosing a Disc Size

When calculating the available bitrate, you need to factor in the length of your project and all additional assets, and weigh them against the capacity of the target media you're using. The options available in most DVD authoring packages are:

• 700 MB: CD-R media. CD-R media is playable on most computers, but not on all set-top boxes.
• 4.7 GB (DVD-5): DVD-writable media such as +R, -R, +RW, and -RW.
• 8.54 GB (DVD-9): A dual-layer DVD-Video disc. Burning one requires access to a drive capable of writing dual-layer discs, or a Digital Linear Tape (DLT) drive; you then send the DLT to a replication facility.

Table 2: Comparing disc capacity and bitrate, assuming a single Dolby Digital audio track
Average Bitrate (Mbps)   Single Layer (4.7 GB/DVD-5)   Dual Layer (8.54 GB/DVD-9)
3.5 (low quality)        133 minutes                   241 minutes
6.0 (medium quality)     94 minutes                    170 minutes
8.0 (high quality)       65 minutes                    118 minutes
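The capacity-versus-bitrate trade-off behind Table 2 can be sketched as a quick calculation. The function below is an illustrative estimate, not a spec-accurate tool: the default audio bitrate and the overhead allowance are assumptions, so treat the result as a starting point for your own bit budget.

```python
def max_video_bitrate_mbps(capacity_gb, runtime_min, audio_mbps=0.192,
                           overhead=0.07):
    """Highest average video bitrate (Mbps) that fits the disc.

    capacity_gb: disc capacity in decimal gigabytes (4.7 for DVD-5,
                 8.54 for DVD-9).
    audio_mbps:  combined bitrate of all audio streams (0.192 Mbps,
                 a common Dolby Digital stereo rate, is an assumption).
    overhead:    fraction reserved for menus, subtitles, and filesystem
                 overhead (7% is a rule of thumb, not a spec value).
    """
    usable_bits = capacity_gb * 1e9 * 8 * (1 - overhead)
    total_mbps = usable_bits / (runtime_min * 60) / 1e6
    # DVD-Video caps the video stream at 9.8 Mbps, so clamp to that.
    return min(total_mbps - audio_mbps, 9.8)

# A 90-minute feature on a single-layer DVD-5:
print(round(max_video_bitrate_mbps(4.7, 90), 2))  # 6.28 Mbps
```

A 90-minute feature therefore lands comfortably in the medium-quality range from Table 2; shorter programs can afford rates closer to the cap.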
Making a 23.976 DVD with Apple Pro Video Applications

A common workflow is completing your edit in Final Cut Pro, exporting to DVD-compliant MPEG-2 using Compressor, then importing the resulting files into DVD Studio Pro.

Setting Markers in Final Cut Pro

To add a compression marker:
1. Move the playhead to the location in time you wish to mark.
2. Press the M key twice in rapid succession. Pressing the key twice adds a marker and displays the Edit Marker dialog. If you already have markers in your sequence and want to make them compression markers, move the playhead to the marker location in time and press the M key once.
3. In the Edit Marker dialog, enter a short descriptive name for the marker.
4. Click Add Compression Marker. Notice how it adds <COMPRESSION> to the Comment field.
5. Click OK. An overlay appears in the Canvas window with the marker information.
Figure 3: Setting compression markers in Final Cut Pro
To add a chapter marker:
1. Move the playhead to the location in time you wish to mark.
2. Press the M key twice in rapid succession. This adds a marker and shows the Edit Marker dialog.
3. In the Edit Marker dialog, enter a short descriptive name for the marker.
4. Click Add Chapter Marker. Notice how it adds <CHAPTER> to the Comment field.
5. Click OK.
Figure 4: Setting chapter markers in Final Cut Pro
Chapter and compression markers must be set in a sequence; markers set in clips are ignored, regardless of the tags they carry.

Sending Sequences to Encode in Compressor

Final Cut Pro does not include the functionality to transcode DVD-compliant MPEG-2 video, but it integrates nicely with Compressor, Apple's stand-alone compression utility that is bundled with Final Cut Pro and Final Cut Studio. Compressor is a robust compression utility offering export options for the most common versions of QuickTime and MPEG-1, 2, and 4. It also includes the
ability to create custom presets, do batch processing, and distribute transcoding across a network of computers.

To send a sequence to Compressor for MPEG-2 transcoding:
1. Select the sequence you'd like to export as an MPEG-2 file.
2. Choose File > Export > Using Compressor. Final Cut launches Compressor and adds the sequence to the Batch window.
3. In the Batch window, click the pop-up menu in the Settings column and choose any setting prefixed with "DVD." The setting you choose depends on the sequence's length, its frame aspect ratio, and how quickly you wish to encode it. After selecting a setting, you will see that two or three entries are now listed below the sequence name. These entries are targets for the optimized MPEG-2 video and audio files that Compressor will create from the sequence.
Figure 5: A sequence added to the Batch window in Compressor

4. Click Submit. Compressor will begin transcoding the files, and you may import them into DVD Studio Pro or iDVD when they are complete.
When video is exactly 24 fps, Compressor skips one out of every 1000 frames to conform it to 23.976. If the video is already 23.976, Compressor progressively encodes all the frames without skipping.
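The conform arithmetic is easy to verify. A quick sketch (the 24000/1001 figure is the standard NTSC-friendly rate, included here as background):

```python
# Skipping 1 frame in every 1000 takes exact 24.0 fps down to 23.976,
# which matches the NTSC-friendly rate 24000/1001 to three decimals.
conformed = 24.0 * 999 / 1000   # one frame dropped per 1000
ntsc_rate = 24000 / 1001        # the true "23.976" rate

print(conformed)                # 23.976
print(round(ntsc_rate, 3))      # 23.976
```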
Creating Markers with Adobe Premiere Pro

A sequence marker in Premiere Pro can be set as a chapter point for use in Encore DVD. When using Premiere Pro 1.5, chapter markers can be embedded in MPEG-2 and AVI files. To add markers in Premiere Pro that are readable in Encore DVD, follow these steps:
1. Open the Program view or Timeline window. Move the CTI to a relevant time requiring a chapter marker.
2. Choose Marker > Set Sequence Marker > Unnumbered, or press * on the numeric keypad.
3. Double-click the marker in the timeline ruler. The Marker dialog appears. In the Marker dialog, enter a name for the chapter in the Chapter field and click OK. Repeat steps one through three until all chapter markers are created.
4. Select the sequence in the Project window and choose File > Export > Adobe Media Encoder.
Choose “MPEG2-DVD” from the format drop down menu. Choose a transcoding preset that matches the video format and video quality desired for the project and click OK to save the resulting file. 5. Import the file into Encore DVD and create a timeline from it by selecting the file and choosing
Timeline > New Timeline or press Ctrl+T with the file selected.
Figure 6: Creating DVD chapter points in Premiere Pro (double-click a marker in the timeline ruler, then enter a name for the chapter)
Chapter 6: 24p Output Options | Compressing for Internet Video
Encore DVD has rules regarding the proximity of chapters to ensure compliance with the DVD specification. When creating chapter points, place them at least fifteen frames apart because Encore DVD moves chapter points that do not follow this rule.
Compressing for Internet Video

Video is becoming more visible on the Internet as consumer adoption of broadband increases and consumers increasingly expect richer experiences with video and audio. Technologies like Flash and dynamic HTML make user experiences better. Flash Video, QuickTime, and Windows Media are probably the most prevalent formats on the Internet; Real Media is still widely used, but it isn't growing as fast as the others.
Factors that Influence Compressing Video for the Internet

When delivering video over the Internet, compression is influenced by the amount of motion, the frame size and frame rate, the bit rate, your audience's Internet access speed, and the bandwidth quota for your web site.

Motion

Sequences such as sporting events, dance, or action scenes require more care during the encoding process to look good. This is usually accomplished through multiple passes of analysis during encoding and by more efficient compression schemes such as variable bit rate encoding, which usually translates into longer encoding times.

Frame Size and Frame Rate

Large frame sizes require higher bit rates and produce larger files. When video contains detail that is crucial to the viewer's understanding, crop the video to focus on the important area, because shrinking the entire image down makes it practically worthless. Interviews, or "talking head" video, can be much smaller, since the framing is usually tight on the subject and the audio is usually more important than the image.

Frame rate is the number of frames per second. Higher frame rates create smoother motion, but also create larger file sizes, because there are more frames to compress and because higher bitrates are required. It is recommended that you work in even divisors of the source's original frame rate. For example, with 24p you can consider encoding at 24, 12, 8, or 6 fps. When there is little motion in the video (such as talking-head video), you can often get away with lower frame rates because the sound is often more important than the image.

Balancing Audio and Video Bit Rates

Bit rate is the amount of data (measured in bits or bytes) per second that the encoded video and audio require for smooth playback.
The bitrate you choose is ultimately influenced by your audience's Internet access speed and the processing power of their computers, but it is also influenced by how you plan to deliver your video (via a streaming server or by progressive download), the nature of the video (inherent motion, frame size, and frame rate), and the quality of the audio (number of discrete audio channels and sample frequency). The more data the footage occupies per second, the higher the quality and the slower it is to download. If the footage does not contain a lot of motion, you can choose a lower bit rate. Voice-only recordings can also be heavily compressed if the original source audio was cleanly recorded.
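A rough way to relate total bitrate to file size and download time, ignoring protocol overhead and buffering (the function name is mine, for illustration only):

```python
def clip_size_mb(total_kbps, duration_s):
    """Approximate file size in megabytes for a clip encoded at a
    given total (video + audio) bitrate. Ignores container overhead."""
    return total_kbps * duration_s / 8 / 1000  # kilobits -> megabytes

# A 2-minute clip at a 220 Kbps total bitrate:
print(round(clip_size_mb(220, 120), 1))  # 3.3 MB
# Best-case download time on a 56 Kbps modem vs. a 1.5 Mbps DSL line:
print(round(220 * 120 / 56))    # about 471 seconds on dial-up
print(round(220 * 120 / 1500))  # about 18 seconds on DSL
```

The dial-up figure shows why a 220 Kbps encode is a poor match for modem users: the clip takes nearly four times its running time to arrive.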
Internet Access Speed of Your Audience

The speed at which your audience accesses the Internet will often dictate the speeds at which you offer your material. If a significant portion of your audience accesses the Internet using low-bandwidth dial-up modems, you will certainly want to publish a video with a total bitrate around 40K a second, which probably means a frame size of 160 by 120 pixels at a frame rate of 8 fps. Conversely, users accessing the Internet over DSL or cable modems can easily handle a total bitrate of 220K a second. This equates to a frame size of 240 by 180 pixels at 12 fps.

Table 3: Rough guidelines for compressing 24p progressive video
Access Speed                   Total Bit Rate  Video Bit Rate  Audio Bit Rate  Frame Size               Frame Rate
56 Kbps Modem                  40–80K          24–64K          16K mono        160 × 120                8–12 fps
DSL or Cable                   200–400K        176–336K        32–64K mono     240 × 180 or 320 × 240   12–24 fps
High-Speed Local Area Network  850–1400K       754–1304K       96K stereo     480 × 360 or 640 × 480   24 fps
When compressing anamorphic 16:9 material, your frame size choices are 160 × 90, 240 × 135, 480 × 270, or 640 × 360.

The Disk Space and Network Transfer Settings for Your Web Site

Disk space is the amount of data you can store on your web site at any given time. Network transfer is the amount of data your site can transfer per month. Both of these are set by your hosting provider as part of your hosting plan. While HTML, optimized graphics, and Flash movies are fairly small, Flash Video can occupy a lot of space and can fill up your web site's disk space quickly. When thousands of users download videos from your site, network transfer adds up quickly; once you have gone over the network transfer quota, your hosting provider may block additional visitors or charge you extra for any additional transfers. If you plan to serve a lot of video, or if your site becomes very popular, you will either want to lower the quality of your video or make sure you have a large quota for network transfers.

Choosing a Media Delivery Platform

The format you choose to deliver your media is an important decision when publishing video, because the various media platforms (Flash, QuickTime, or Windows Media) offer different functionality beyond their codecs. Just as important are the market penetration (referred to as reach or ubiquity) of each of these media platforms and their cross-platform support. Flash Video offers the most functionality, since the entire Flash Platform can be leveraged behind it. Flash Video and Windows Media have the greatest reach. Flash and QuickTime have the best cross-platform support.
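A back-of-the-envelope check against your transfer quota can prevent surprises. The file size and download count below are hypothetical:

```python
def monthly_transfer_gb(file_mb, downloads_per_month):
    """Estimate monthly network transfer generated by one video file."""
    return file_mb * downloads_per_month / 1000  # MB -> GB (decimal)

# A hypothetical 10 MB Flash Video clip downloaded 5,000 times a month
# generates 50 GB of transfer, enough to exhaust many hosting quotas:
print(monthly_transfer_gb(10, 5000))  # 50.0
```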
Flash Video

Perhaps the fastest growing Internet video standard is Flash Video (FLV). With the introduction of the Sorenson Spark codec in Flash Player 6, many web designers and video producers have increasingly embedded video in rich internet applications (RIAs), movie trailer sites, and web-based training using the ubiquitous Flash Player.
Available Flash Video Codecs

Flash Player 6 and above supports the Sorenson Spark codec. Flash Player 8 and above supports the On2 VP6 codec, which offers greater quality at the same bitrates and supports an 8-bit alpha channel for compositing video on top of Flash backgrounds or interfaces. Sorenson Spark is the way to go when you want the greatest reach; the On2 VP6 codec is the choice when you want better quality and more creative options. Macromedia also provides JavaScript browser-detection code that makes it easier for users to update to the latest Flash Player when they try to view On2 VP6 content without having Flash Player 8.

Video Delivery Methods Supported by Flash Video

Flash Video supports three forms of delivering video: embedded, streaming, and progressive download.

Embedded video should only be used for very short and small video clips: usually no more than ten seconds' worth of video at thumbnail resolution (80 by 60 pixels, for example). Keeping FLVs external is considered a best practice, since it offers better performance and memory management, and since the FLV and the main SWF file can have frame rates independent of one another.

Streaming Flash Video is delivered by sending the video directly to the desktop from a server equipped with Flash Media Server, a machine dedicated to streaming Flash content such as video. The server is suited for delivering video to many users simultaneously. If you cannot install a Flash Media Server for your web site, Macromedia has partnered with a few Content Delivery Networks (CDNs) that license Flash Video Streaming Services for streaming Flash Video; Akamai, VitalStream, and Mirror Image are a few. You upload the FLV file to their server and use one of their skins, or a skin on your server that references the stream's location on their server.

Progressive download is not to be confused with progressive video frames.
With progressive download, the client computer downloads the video content from a web server. Progressive download is great when you don't have access to a Flash Media Server and simply want to put the files on your web site along with HTML, images, and other web documents.

Encoding Utilities for Flash Video 8

With the introduction of Studio 8, Macromedia offers three convenient ways to encode Flash Video that share the same presets and user interface for compression settings:

• The Import Video wizard in Flash Professional. A wizard inside Flash 8 Professional that works well for importing video into a single Flash project. It walks you through picking a video, setting compression options, and picking a skin (a user interface) for playing, pausing, skipping chapters, and adjusting sound.

• The Macromedia Flash 8 Video Encoder. A stand-alone application for encoding a single file or a batch of video files. This utility can be installed separately from the Flash authoring application on a dedicated video workstation. It's also one method for encoding Flash Video prior to inserting it onto a web page using Dreamweaver.

• The Flash Video Encoding Settings export module for QuickTime. This module is available to any product that supports QuickTime export modules.
Figure 7: Flash Video export options. Select a clip and click the Settings button. The Encoding tab sets video and audio compression settings (by default the On2 VP6 codec is used); cue points used for navigation or triggering events are set in the Cue Points tab; and the Crop and Trim tab has controls for cropping and trimming the video.

Table 4: Encoding parameters available in the Flash Video export module
Encode Video: Adds video to the FLV file.
Video codec: Options are the On2 VP6 codec, supported by Flash Player 8, or the Sorenson Spark codec, supported by Flash Player 6 and 7.
Encode alpha channel: Only available for the On2 VP6 codec. Encodes and includes an 8-bit alpha channel in the FLV file for compositing the clip over a background Flash movie. An alpha channel needs to be present in the source video file.
Frame rate: The frame rate for the movie. This defaults to the source's frame rate, but other rates are listed. Even divisors of the source frame rate work best.
Key frame placement: Automatic or custom. Automatic allows the encoder to pick the best interval for keyframes; custom allows you to pick the interval.
Key frame interval: The distance between keyframes. Lower numbers mean better quality but increase file size.
Quality: Presets for the data rate field, which controls bitrate and quality.
Max data rate: The maximum data rate. The higher the bitrate, the better the quality.
Resize video: Enables resizing.
Width: The width of the resulting FLV file.
Height: The height of the resulting FLV file.
Maintain aspect ratio: Constrains the aspect ratio to the original source.
Encode Audio: Enables audio encoding. Only MP3 is available.
Data rate: The audio data rate. Any rate between 16 kbps and 235 kbps is possible.
Cue Points

Cue points are similar to chapter markers. A cue point marks a place in time and makes that point a destination that can be accessed by clicking a button or link, or it can be an event that triggers ActionScript code. Cue points are set in the Cue Points tab of the Flash Video 8 encoding interface. To set one:
1. Scrub to the location you wish to mark.
2. Press the Add Cue Point button.
3. Give the cue point a name.
4. Specify the type of cue point.
Figure 8: Creating Cue Points in the Flash 8 Video Encoder. Drag the current time indicator to the location in time where you would like to add a cue point (use the Left and Right arrow keys to make small time movements), click the Add button, and set the Type: Navigation for something similar to DVD chapters, or Event for use with ActionScript.
Trimming and Cropping

Trimming allows you to set in and out points for the exported FLV. If you don't have an NLE on the machine on which you're encoding, or simply want to encode a known portion of video, this is the simplest way to encode a small portion of the source video. The cropping options work exactly like cropping a photograph in Photoshop. If you're encoding footage that has been letterboxed, you can use the crop options to remove the black bars that appear at the top and bottom of the frame. When preparing video for distribution over the Internet, you do not have to worry about action- and title-safe areas; computer screens do not suffer from underscan and overscan problems.

Flash Video Components or Skins

An FLV file by itself cannot be played in the Flash Player; it requires a SWF file to play it back. Most FLV files are wrapped in a containing SWF that references the FLV, in addition to another SWF file known as a skin file. A skin file provides navigation and playback controls.
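When cropping letterboxed footage, the height of the black bars follows directly from the frame size and the picture's aspect ratio. A small sketch (the function name is mine; it assumes square pixels, which holds for web frame sizes but not for native 720 × 480 DV frames):

```python
def letterbox_bar_height(width, height, picture_aspect=16 / 9):
    """Rows of black to crop from the top and bottom of a letterboxed
    frame, assuming square pixels."""
    picture_height = width / picture_aspect
    return round((height - picture_height) / 2)

# A 640 x 480 frame letterboxing a 16:9 picture:
print(letterbox_bar_height(640, 480))  # 60 pixels top and bottom
```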
Figure 9: The relationship between a container SWF, a skin SWF, and the FLV file. The Flash Video and skin SWF files are external to the container SWF that references them; the container SWF file is referenced in an HTML page and presented to web site visitors.
The FLV Flash Components (Flash Professional 8 and Studio 8 only)

The FLVPlayback component was revised in Flash Professional 8, and it now includes 32 unique skins that vary in color, layout, and controls. After adding video to the Flash movie, use the Select Skin dialog to choose a skin.

Importing Video and Choosing a Skin

The Import Video wizard in Flash Professional 8 simplifies the process of embedding a link to an FLV inside a SWF and choosing a skin for it.
Figure 10: The Import Video wizard in Flash Professional 8

1. In Flash Professional 8, create a new Flash file. It may help to save it in the same location as the FLV file.
2. Choose File > Import > Import Video.
3. Locate the FLV file you wish to use and click Next.
4. The Deployment options appear. Select Progressive Download from Web Server if the FLV will reside along with other files on your web server. If the file will be streamed from a Flash Media Server, choose one of the streaming options. Click Next.
5. In the Select Skin step, choose a skin from the Skin pop-up menu. Click Next. You can change the skin later by editing the object's skin parameter in the Component inspector.
6. In the Finish Video Import screen, confirm the settings and click Finish. An FLVPlayback component with the correct size appears on the Stage.
7. Choose Control > Test Movie to preview the video and skin file.
Publishing the Flash Video File

To publish the video, choose File > Publish; Flash outputs two SWF files and an HTML document that references the SWF and FLV files. You can upload these three files to your web server, or you can open the HTML document and copy and paste the code from it into a page on your web site. Since the Flash Video components rely upon performance enhancements in the Flash 8 Player, using these components requires Flash Player 8. To make things easier for users, the Publish Settings dialog in Flash 8 can embed JavaScript code that detects whether the user has Flash Player 8 installed. Most users who do not have Flash Player 8 and come across your video can usually install the version 8 player in a minute or less.

Flash Video and Dreamweaver

Dreamweaver 8 integrated the Flash Video extension that was previously a separate extension. In this integration, three additional skins were added, the Flash detection code was improved, and the extension was updated to work with both the Sorenson Spark and On2 VP6 codecs. This extension only works with FLV files. If your video has not been encoded as Flash Video, you will need to encode it with Flash 8 Professional, the QuickTime export module for Flash Video, or the stand-alone Flash Video 8 Encoder.

To add Flash Video to a page in Dreamweaver:
1. Select Insert > Media > Flash Video, or select the Insert Flash Video button in the Insert > Media
toolbar.
Figure 11: Adding Flash Video from the Insert bar in Dreamweaver 8

2. Select the Video type. In most cases this is Progressive Download Video. If you have a Flash Media Server, choose Streaming Video.
3. Click Browse and locate the FLV file.
4. Choose a skin. Preview images appear below the control and show what the skin looks like.
5. Click Detect Size to properly size the SWF file.
6. Select additional options, such as auto play and auto rewind, and customize the message that appears for users who do not have Flash installed. Click OK when finished.
Figure 12: The Insert Flash Video dialog in Dreamweaver 8
If you need to change the settings, select the Flash object in the Design view and change the settings in Dreamweaver's Property inspector. To learn more about implementing Flash Video in Flash movies and web pages, visit: http://www.macromedia.com/devnet/flash/video.html
QuickTime

Apple's QuickTime, the granddaddy of digital media architectures and formats, has many codecs suitable for CD-ROM, broadcast, archiving, and distribution over the Internet. While QuickTime is practically a platform and has many legacy codecs, this section covers two codecs: MPEG-4 and H.264. These codecs are the most relevant to those doing QuickTime progressive and streaming downloads today, because MPEG-4 is supported in QuickTime 6.0 and above, and H.264, introduced in QuickTime 7, offers the best quality. QuickTime can be exported from any application that supports it; Final Cut Pro, Motion, After Effects, and Premiere Pro all do. If you own Final Cut Pro or the Final Cut Studio, you will most likely begin to compress movies as QuickTime using Compressor, the encoding utility that ships with both products. It offers
a preview, presets, a more professional-looking user interface, batch encoding, and distributed encoding using Apple's clustering software, QMaster.

MPEG-4

MPEG-4 Part 2 was perhaps the first of the high-quality codecs that could be distributed on cell phones and desktop computers. While some may debate its quality compared to previous-generation codecs, such as Sorenson Video 3, its platform scalability makes it a better codec choice for low-bandwidth movies and mobile devices that support it. The Compressor 2 presets that have a "(QuickTime 6 Compatible)" extension all produce MPEG-4 Part 2 files.
These categories contain presets for creating QuickTime 6-compatible MPEG-4 movies.
Figure 13: MPEG-4 options in Compressor
H.264

H.264, also known as MPEG-4 Part 10, scales from cell phones all the way to HD DVDs. A successor to MPEG-4 Part 2, it offers a substantial performance increase at any given bitrate. It was introduced in QuickTime 7, and its strength is its scalable performance and efficiency. It's considered to be twice as efficient as MPEG-2, which means you get roughly the same quality for half the bitrate, or twice the quality for an equivalent bitrate. Compressor comes with several presets for H.264 output. These presets have a "(QuickTime 7 Compatible)" extension.
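A rough worked example of that efficiency rule of thumb (the factor of two is the chapter's estimate, not a measured value):

```python
# If H.264 is roughly twice as efficient as MPEG-2, a clip that needs
# 6.0 Mbps as MPEG-2 needs about half that as H.264.
mpeg2_mbps = 6.0             # a typical medium-quality MPEG-2 rate
h264_mbps = mpeg2_mbps / 2   # comparable quality at half the bitrate
print(h264_mbps)             # 3.0

# One decimal gigabyte holds roughly this many minutes at that rate:
print(round(1e9 * 8 / (h264_mbps * 1e6 * 60)))  # about 44 minutes
```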
These categories contain presets for creating QuickTime 7-compatible H.264 movies.
Figure 14: H.264 options in Compressor
Markers

When exporting long-form material, it's good practice to include chapters in the QuickTime file. Chapters provide easy bookmarks for users to access sections within the QuickTime movie. Final Cut supports creating chapters in the Edit Marker dialog and exporting them in the Save As QuickTime dialog. Compressor shows markers in the Viewer window and exports them automatically.
Figure 15: Viewing chapters in the Viewer window
To learn more about how to add QuickTime to a web page, visit: http://www.apple.com/quicktime/tutorials/embed.html
Windows Media

Windows Media is Microsoft's digital media format. Like QuickTime, it has gone through several revisions and has many choices for codecs: legacy codecs offer backwards compatibility with earlier versions of Windows, and more modern codecs offer increased quality and efficiency. It is both an authoring and distribution format like QuickTime, but authoring is only really done on the
Windows platform. Nearly all video applications on Windows support Windows Media natively. Windows Media files can be played back on Windows and Mac OS X systems, but the Windows Media Player is not installed on Macintosh systems; the user must download and install the player from Microsoft.

Creating Windows Media in Premiere Pro

The Adobe Media Encoder, which is bundled with Premiere Pro, does not ship with presets for standard-definition 24p material. To encode 24p material at 24p, you need to create an audience setting and store it in a custom preset for the encoder.

1. In Premiere Pro 1.5, select File > Export > Adobe Media Encoder.
2. When the Media Encoder dialog appears, choose the "WM9 NTSC 512K download preset" (or
another Windows Media 9 preset) and then follow the steps in the following diagram.

A. Audiences: Since there are no presets for 24p DV material, click the Audience category to show the audiences for the current preset.
B. Frame Rate: The easiest way to set the frame rate to 23.976 is to pick Same As Source from the Frame Rate drop-down.
C. Width & Height: This defaults to a 4:3 aspect ratio. If you are working with anamorphic footage, adjust the frame height so that the frame aspect ratio is 16:9.
D. Click OK: You will be prompted to save the modifications as a custom preset. Change the preset name to reflect the changes you've made and click OK in the Choose Name dialog.

Figure 16: Tweaking the settings for 24p material in the Adobe Media Encoder

3. In the Save File dialog, choose a location and enter a name for the Windows Media file. Click
OK.

Creating Windows Media with Final Cut Pro

Creating Windows Media from footage edited on Mac OS X used to be a multistep process: editing on Mac OS X, saving a QuickTime movie, transferring it to a PC, and then encoding the QuickTime movie as Windows Media. It has become a lot easier with Flip4Mac Studio and Studio Pro from Telestream (http://www.flip4mac.com). To encode footage using Flip4Mac and Final Cut Pro, do the following:
1. Select a sequence or movie in the Browser window.
2. Choose File > Export > Using QuickTime Conversion.
3. Select Movie to Windows Media from the Format drop-down menu.
4. Choose an encoding preset from the Use drop-down menu.
5. If you would like to confirm or alter the settings, click Options.

Figure 17: Tweaking the settings for 24p material in the Flip4Mac Windows Media Export Settings dialog. In the video settings, Size has options for normal and widescreen aspect ratios; for 24p material the rate should be 23.98 and the output should be progressive.

6. Click Save to begin encoding the sequence as Windows Media.
To learn more about how to add Windows Media to a web page, visit: http://www.macromedia.com/cfusion/knowledgebase/index.cfm?id=tn_15777
jobs, but in retrospect, I don’t think I would have been happy to specialize in just lighting, or texturing, for instance. I’m happy getting my hands into all the stages in production, and I think given the software available today as well as the DVX-100, there are more people like me out there.
What’s it like to work inhouse vs freelance?
Case Study: Dan Cowles

Dan is the Senior Video Producer at Macromedia. He shoots, directs, and produces the video presentations on the Macromedia.com website.

What is your background?
I've been involved in film and video for several years. Before that I was working in IT, and I wanted to get a job as a full-time filmmaker. So I went to film school, where I did a lot of student films and shot in 16 mm using Bolex and Arri film cameras.

What did you do after graduating?
When I graduated, I was a first assistant director, which is a great place to be. You learn about everything because you're working like a dog.

What led you to work at Macromedia?
I had been making films on the side and this job opened up at Macromedia. I didn't have corporate experience, but I knew a little bit about everything, and that's what they wanted. I knew how to shoot, light, record audio, edit, compress, and get it on the web. I had the breadth they were looking for.

What's your job like?
I am a one-person shop here. I do everything from writing and story concepting with my peers in Creative Services, to travelling all around the world with my gear, to editing, adding motion graphics, and compressing it for the web site. From A to Z, I'm doing everything. I had interviewed at a lot of other places, Pixar, ILM, PDI, and I didn't get those jobs. In some ways I think I'm lucky. I get paid to make films all day, and I have a steady paycheck, which many freelancers don't have. They're always hustling to get work. The trade-off is the variety one gets as a freelancer and the downtime between projects.

What projects are you doing in 24p?
At work we shoot almost everything in 24p because all of our footage goes out to the web, and 24 fps is a really nice frame rate to work with. A lot of computers cannot play back 30 fps video but can handle 24 fps. You can also halve the frame rate to 12 fps, which works anywhere. Another bonus is not having to deal with interlacing issues. Granted, smaller videos like 320 by 240 are generally fine coming from interlaced video, but larger movies look best when the source is progressive.

Keying in 24p?
You have to take a lot of care when keying DV. Since we comp against a flat background, it's not as hard, but it does become more difficult when trying to composite against a complicated background. DV is hard to key because it has jaggedy edges from 4:1:1 color sampling and the DV codec. The main thing is to light the background evenly. You also want to keep your subject far away from the background so there's no bounce or spill. With DV you always have to treat the edge, usually with a blur or a choker matte.
Chapter 6: 24p Output Options | Compressing for Internet Video
Transferring Video to Film

In perhaps ten years, there will not be a need to print to film, as most projectors in theaters will be digital and will have the resolution required to make current film distribution obsolete. While one can argue that film will always have more resolution than digital, the economics of digital distribution will make this happen. In the meantime, movies are still distributed as reels of film, and for the moment, film buyers take projects originating on film or already transferred to film more seriously.

Transferring video to film is called many things today: a film blow-up, a film-out, upconverting, and uprezzing. There are probably as many unique methods for transferring video to film as there are names for the process. While this section will not cover every process, it does provide an overview of the steps involved and ends with a case study of Eric Escobar, a filmmaker who shot a short film digitally, did a digital intermediate on the desktop, and finished on film.

Cost Considerations

Rates for a feature-length transfer start around $50,000 and go up depending upon the additional post services required to prepare the video for the transfer. Before making such an investment, it is wise to talk with other filmmakers and DPs at festivals and learn from their experiences. It's also a good idea to call the production house and ask to discuss your project with someone. Most houses now offer a short film-out test using a few scenes. This service can be had for a reasonable cost, and any fees can often be applied to the final film-out. Since there are so many methods and no real standard process, predicting what will appear on the big screen is difficult. You should pick a facility during pre-production and follow their guidelines for camera setup, lighting ratios, and camera movement during production.

Table 5: Price Estimates for transferring 24p progressive video to film
Format         Short Film             Feature Length
DV to 16 mm    $75–200 per minute     $75–200 per minute
DV to 35 mm    $200–600 per minute    $150–500 per minute
HD to 35 mm    $600+ per minute       $525+ per minute
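The per-minute rates in Table 5 make it easy to ballpark a transfer budget before calling a facility. The sketch below is only an illustration of that arithmetic; the rate table is taken from Table 5, but real quotes vary by facility and by the prep work a project needs.

```python
# Rough cost estimator for a tape-to-film transfer, using the per-minute
# rate ranges from Table 5. Real quotes depend on the facility and on the
# additional post services required, so treat these as ballpark figures.

# (low, high) dollars per minute; None means an open-ended "+" rate
RATES = {
    ("DV", "16mm", "short"):   (75, 200),
    ("DV", "16mm", "feature"): (75, 200),
    ("DV", "35mm", "short"):   (200, 600),
    ("DV", "35mm", "feature"): (150, 500),
    ("HD", "35mm", "short"):   (600, None),
    ("HD", "35mm", "feature"): (525, None),
}

def estimate(source, stock, kind, minutes):
    """Return (low, high) total cost in dollars; high is None if open-ended."""
    low, high = RATES[(source, stock, kind)]
    return (low * minutes, high * minutes if high is not None else None)

low, high = estimate("DV", "35mm", "short", 12)  # a 12-minute short
print(f"DV to 35mm, 12 min: ${low:,}-${high:,}")  # $2,400-$7,200
```

Note how quickly the range widens for 35 mm work; this is one reason a short film-out test of a few scenes is worth doing first.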
A Film Out Shows Commitment

Eric Escobar, an independent filmmaker with two shorts accepted at Sundance, has this to say about doing a film-out: "If you send a DVD to a film festival, you can think of it as one of the 5,000 other DVDs they'll receive. If you output your short on film, it will be in a group of maybe a few dozen." He also added that "films tend to be shown in main festival theaters whereas DVDs can be shown anywhere, like a small room in a community center." You have to ask yourself: outside of a great story, acting, and production value, how can I make my film stand out from the competition? You also have to ask how you want people to see your project and what is going to give it the best exposure.
Recording Options

The three methods for transferring video to film are the kinescope, the electron beam recorder, and the laser recorder. The kinescope is the oldest method and offers the least quality; the laser recorder is the newest method and offers the best quality.
Chapter 6: 24p Output Options | Transferring Video to Film
Kinescope

The kinescope was developed when magnetic videotape did not exist and film was used to store television broadcasts. It's a simple mechanism: a film camera records from a video monitor at 30 fps. Black-and-white material simply records from one monochrome monitor. Color television is recorded using one camera and three monitors for the red, green, and blue components, which are merged onto one negative by the use of a prism. This method is definitely on its way out because it can't compete with the quality offered by the other two methods.

Electron Beam Recorder / Film Recorder

An electron beam recorder (EBR) uses an electron beam that exposes through discrete R, G, and B filters onto three separate film strips. These strips are then used to create a single print. Current-generation models put the color component filters in the path of the electron beam and are able to expose onto a single film strip. An EBR is significantly higher quality than a kinescope, but doesn't provide the same level of sharpness as a laser film recorder. Some say that the softness of an EBR is film-like.

Laser Recorder

Laser recording was initially used mostly to print computer-generated imagery (CGI) to film stock so that it could be edited with the live-action footage. Laser recorders were used for short shots, and since they were initially very slow, using one was considered very expensive. With increased speed, more powerful desktop computers, and demand for taking 24p video to film, laser recording is becoming the preferred method for transferring video to film. A laser recorder transfers an image directly to the negative using a laser for each of the RGB color components. The output from a laser recorder is sharper than that coming from an EBR, and with this quality comes longer rendering time. Some recorders (such as those from LaserGraphics) are becoming very fast and can image a frame in 3 seconds or less.
Delivery and Prep Options

Depending upon your experience, equipment, and budget, you can:
• Bring your project and master tapes to the facility to master at a higher, lossless resolution before the project is printed to film.
• Output 21-minute reels with 2 POPs at the beginning and end of each reel.
• Output a digital file and deliver your project to the facility on a hard disk. This option has many permutations since the process is relatively new and most facilities offer different advice based upon their own proprietary methods.

Onlining with Master Tapes

In this scenario, you deliver a set of master tapes from your final edit. You have probably done a cuts-only edit and made only temporary transitions, titles, and color correction that won't be used in the onlining session. This is because you are relying upon the film-out facility to retime, color-correct, resize, deartifact, and title the project before sending it to a film recorder.
To start, you will need to give the lab a project file from your NLE or create an edit decision list (EDL), XML, or Advanced Authoring Format (AAF) file from the project file that points to the master tapes and indicates all the cuts. In Final Cut Pro, you select a sequence and choose File > Export > XML or File > Export > EDL. In Premiere Pro, you select a sequence and choose Project > Export Project as EDL or Project > Export Project as AAF.

Preparing Reels

When you prepare reels, you are printing your project to tape in 21-minute segments. The final edit should have a starting timecode of 01:00:00:00. You then segment the project into 21-minute clips. The significance of 21 minutes is that 22 minutes is the length of a reel of film. Feature films are actually long reels where individual 22-minute reels have been combined. Ideally you create each segment on its own tape, but you can put multiple segments on a single tape if there is room. In any case, you should provide the facility with a log sheet that breaks down each reel by in and out points and describes the scene that occurs at those points. To simplify this combination and to prevent synchronization issues, the segments should start at a scene change where there is a clear cut (a cut from night to day, for example). If you segment in the middle of a scene, the final print will appear jarring because it will unexpectedly pause between reels. Also, since reels are not always printed in the same order, slight color changes that occur at different times could give one scene two slightly different color casts. Similarly, you should never segment reels between edits with overlapping sound, because there will be a noticeable hiccup as the reels change. At the start of each segment, include a "2 POP," one frame of bars and tone, two seconds before the first frame of video and/or audio. For example, since the first frame of video starts at 01:00:00:00, the 2 POP occurs at 00:59:58:00.
Likewise, a 2 POP needs to occur two seconds after the last frame of video and/or audio. This "tail pop" serves as a registration point between reels and ensures a smooth transition. Since DV tape is very fragile, you should consider cloning your original source tapes to high-quality tape stock.

Delivering Digital Files

When going digital, you end up delivering to the video-to-film transfer house a large number of still images based on the TIFF, Targa, or Cineon file format. Some houses are now accepting QuickTime movie files as well. These files can be delivered on hard disk, DVD-ROM, or, if you have a fast connection, over the Internet to the facility's FTP site. When editing, avoid any intermediate compression steps, because you want the cleanest source for the film blow-up; any additional recompression will show up on the big screen. To avoid recompression, use the source tapes or QuickTime reference movies when preparing the final cut for the facility. Do graphics, titles, and animation after the video has been transferred to a higher-resolution format at the facility. Alternatively, do the titles, animation, and graphics independently from the edit at a higher resolution. In either case, these will be composited onto the final edit before it is transferred to film, ensuring you get the best quality.
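The head-pop and tail-pop arithmetic described above is simple timecode math. Here is a minimal sketch, assuming non-drop 24 fps timecode, a 21-minute reel, and the two-second offsets described in the text; confirm the exact POP placement your transfer facility expects before mastering.

```python
# Compute head and tail 2 POP positions for a reel, assuming non-drop
# 24 fps timecode and the conventions described in "Preparing Reels":
# a 21-minute reel and POPs two seconds outside the program on each end.

FPS = 24  # assumed frame rate; adjust to match your delivery spec

def tc_to_frames(tc):
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(total):
    f = total % FPS; total //= FPS
    s = total % 60;  total //= 60
    m = total % 60;  h = total // 60
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

def reel_pops(start_tc, reel_minutes=21):
    start = tc_to_frames(start_tc)
    end = start + reel_minutes * 60 * FPS      # first frame after the reel
    head_pop = frames_to_tc(start - 2 * FPS)   # 2 seconds before the reel
    tail_pop = frames_to_tc(end + 2 * FPS)     # 2 seconds after the reel
    return head_pop, tail_pop

print(reel_pops("01:00:00:00"))  # head pop at 00:59:58:00, as in the text
```

For a program starting at 01:00:00:00 this reproduces the 00:59:58:00 head pop from the example above.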
Some filmmakers are beginning to do the digital intermediate (DI) on the desktop, using an NLE for the edit and a program such as After Effects with Magic Bullet (covered in the previous chapter) to perform the color correction. They then hand the resulting files over to a facility, where they are run through a machine such as a Teranex converter box to retime, scale, and deartifact before going to a film recorder. Other filmmakers are doing everything on the desktop and handing over film-sized files ready for the recorder. Before doing any of these things on your own, work closely with the lab, do tests, and be prepared to spend time and money. When attending festivals, don't be shy about asking fellow filmmakers how they did their film-outs. To get you started, below is a list of facilities that do tape-to-film transfers.

Table 6: Tape to Film Facilities
Company                    Web site URL
Alpha Cine Labs Seattle    http://www.alphacine.com
Cineric                    http://www.cineric.com
DuArt                      http://www.duart.com
DVFilm                     http://www.dvfilm.com
EFilm                      http://www.efilm.com
Four Media Corporation     http://www.4mc.com
Heavy Light Digital        http://www.heavylightdigital.com
Metropolis Film Labs       http://www.metropolisfilmlab.com
Monaco Labs                http://www.monacosf.com
Soho Digital Film          http://www.sohodigital.com
Case Study: Eric Escobar

Eric Escobar is a filmmaker and teacher. He has had two films accepted at Sundance and is currently writing his first feature-length screenplay.

What is your background?
My background is in both traditional filmmaking and software development. I started making films when I was 15 years old with friends in high school. We would shoot with VHS and 8mm camcorders. In college, I created political activist videos. After graduating, I did educational and political documentaries for five years. I then got more interested in the technical side of filmmaking and worked at Adobe Systems as a technical support representative for Premiere and After Effects, and then I moved to New York, where I was an engineer at a post house. I moved back to California to join the Final Cut Pro development team to work on Final Cut Pro 3 and 4. Now I'm teaching filmmaking, making my own short films, writing a feature, and directing commercials and music videos full time.

What got you interested in 24p?
24p has been the holy grail for independent filmmakers working with video because it has the same frame rate as film and is easier to transfer to film. I've been excited about 24p from the very beginning. I was a beta tester for Magic Bullet, and my first film at Sundance was shot in 60i and run through Magic Bullet. When I would show people the footage, they would ask me if it was a documentary, and after running it through Magic Bullet, I was asked if I shot on film. Granted, there's more to it than that, but it surprised me. There's something magical about the psychology of 24p. It's as if 24p is this subconscious code that everyone understands as cinema, whereas 60i is something we all understand as television.

What projects are you doing in 24p?
I've done a lot in 24p. I have shot two narrative short films that have played in film festivals around the world. I've worked on television documentaries that were shot in 24p. I've shot music videos in 24p, and recently directed a television commercial that was shot in 30p.

Describe your personal projects using 24p.
My most recent project, One Weekend a Month, is about a single mother of two children who receives a call from her best friend telling her that their National Guard unit has been activated for duty in Iraq. She has to figure out what she is going to do with her children and her life now that she's been activated for duty. It is a 12-minute film, shot in one day, using the Panasonic Varicam. While it began as a 720p HD project, it was shown at festivals as a 35mm film. The project was a proving ground for me, as I did the coloring and preparation for the film on my computer.

Describe the process you used for this film.
I shot in DVCProHD, captured using the Panasonic AJ-1200 over FireWire, and brought it into Final Cut Pro. My editor edited a downconverted DV-resolution proxy on his PowerBook, which I relinked to my HD masters. I exported a QuickTime reference file and imported it into After Effects, where I used The Orphanage's eLin plug-in. This allowed me to work in film color space, where I was able to push the footage's white and black values into areas that are only reproducible on celluloid. When I was finished, I output a series of image sequences in Cineon format. These files were then scaled using a Teranex and output on an Arri laser film recorder.
Glossary

1.33:1 – Standard aspect ratio where there are four horizontal units for every three vertical units. Also referred to as 4:3 or 4 × 3. Most standard television screens have an aspect ratio of 1.33:1.
1.78:1 – Widescreen aspect ratio where there are 16 horizontal units for every nine vertical units. Also referred to as 16:9 or 16 × 9.
4:1:1 – A color sampling ratio where all of the luminance information but only one-quarter of the chrominance is recorded.
4:2:0 – A color sampling ratio where all of the luminance information but only one-quarter of the chrominance is recorded. 4:2:0 differs from 4:1:1 in that it does not record chrominance on every line.
4:2:2 – A color sampling ratio where all of the luminance information but only half of the chrominance is recorded.
4:4:4 – A color sampling ratio where the two channels of chrominance are sampled at the same rate as the luminance.
16:9 – A display format for a widescreen monitor.
24 fps – A format used exclusively for film applications. Film is typically shot and projected at a rate of 24 fps, so this SMPTE time code format is useful when working with content that originated on film.
24p (24 progressive) – Video that resembles film at 23.976 fps (also labeled as 23.98).
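The sampling-ratio entries above can be compared numerically. The sketch below uses the standard two-row reading of the x:x:x notation (the entries above give a simplified one-line description): for a 4-pixel-wide, two-row block, the second number is the chroma samples in the first row and the third is the additional chroma samples in the second row. This is an illustration, not part of the glossary.

```python
# Count luma and chroma samples in a 4x2 pixel block for each common
# sampling ratio, using the two-row J:a:b interpretation of the notation.

def samples_per_block(a, b, j=4, rows=2):
    luma = j * rows          # every pixel carries a luma sample
    chroma = 2 * (a + b)     # Cb and Cr are each sampled (a + b) times
    return luma, chroma

for name, (a, b) in {"4:4:4": (4, 4), "4:2:2": (2, 2),
                     "4:1:1": (1, 1), "4:2:0": (2, 0)}.items():
    luma, chroma = samples_per_block(a, b)
    print(f"{name}: {luma} luma + {chroma} chroma samples per 4x2 block")
```

Note that 4:1:1 and 4:2:0 both keep one-quarter of the chrominance (8 luma, 4 chroma samples per block); they differ only in whether the reduction happens horizontally or across lines.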
25 fps – The frame rate used for PAL, the European video standard.
29.97 Drop Frame – Drop-frame timecode originated with the introduction of NTSC color television. To maintain compatibility with existing black-and-white television, the frame rate was adjusted slightly. In order to keep sync with real time, frames 0 and 1 of the first second in each minute are "dropped," or not counted, unless the total number of minutes is a multiple of ten.
32 kHz – A lower-quality audio rate supported by DV cameras, which allows four tracks of 32 kHz audio.
44.1 kHz – The frequency used in recording CD PCM audio.
48 kHz – The frequency used in recording DV audio.
1080i – 1080 lines of interlaced video (540 lines per field). Usually refers to 1920 × 1080 resolution in 1.78 aspect ratio.
1080p – 1080 lines of progressive video (1080 lines per frame). Usually refers to 1920 × 1080 resolution in 1.78 aspect ratio.
720p – 720 lines of progressive video (720 lines per frame). Also referred to as 1280 × 720.
180 degree rule – Assuming a line is drawn through the center of action, the 180 rule states that for any two consecutive shots, the position between the two cameras does not exceed 180 degrees. It's also referred to as "preventing the camera from crossing the line."
3:2 pulldown – An uncommon variation of 2:3 pulldown where the first film frame is repeated for three fields instead of two.
2:3:3:2 pulldown – This refers to 24p advanced (24pa) pulldown. Advanced pulldown employs a 2:3:3:2 cadence, completely recreating original frames by spreading four progressive frames over five interlaced frames. The third interlaced frame is a jitter frame and is usually discarded when pulldown is removed during capture.

A
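As an aside, the drop-frame counting rule in the 29.97 Drop Frame entry can be sketched as a frame-count-to-timecode conversion. This is one common implementation, counted at a nominal 30 fps; semicolons are the conventional marker for drop-frame timecode.

```python
# Convert a frame count to drop-frame timecode (29.97 fps, counted at a
# nominal 30 fps). Frame numbers 0 and 1 are skipped at the start of each
# minute, except every tenth minute -- the rule described in the glossary.

def frames_to_df(frame_number):
    fp10m = 17982            # frames in ten minutes of drop-frame counting
    fpm = 1798               # frames in a minute that skips two numbers
    tens, rem = divmod(frame_number, fp10m)
    if rem < 1800:           # first minute of the block: nothing skipped
        total_min = tens * 10
        frame_in_min = rem
    else:
        total_min = tens * 10 + 1 + (rem - 1800) // fpm
        frame_in_min = 2 + (rem - 1800) % fpm   # skip numbers ;00 and ;01
    hh, mm = divmod(total_min, 60)
    ss, ff = divmod(frame_in_min, 30)
    return f"{hh:02d};{mm:02d};{ss:02d};{ff:02d}"

print(frames_to_df(1800))   # 00;01;00;02 -- numbers ;00 and ;01 skipped
```

No frames of video are discarded; only the timecode numbers jump, which keeps the count aligned with wall-clock time.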
ADR (Automatic Dialog Recording) – The process of recording audio after principal photography has taken place to replace dialog in a shot or provide off-camera audio. In some cases, talent is asked to lip-sync to a shot.
Advanced Pulldown – See 2:3:3:2 pulldown.
Anamorphic – A lens that squeezes a wide image to conform to the dimensions of a standard frame width. Later, the anamorphic lens on the projector unsqueezes the image.
Aperture – The open area of the iris that controls the amount of light admitted into the camera.
Apple Box – A box, usually made of pine, that comes in three sizes (quarter, half, and full). During production, apple boxes are used for standing on or propping up equipment.
Art Director – The crew member responsible for the design of props, sets, and costumes. The art director works closely with the director during preproduction in designing a film's look.
Aspect Ratio – The ratio of the width to the height of the film or television image.
Assistant Director – The crew member responsible for breaking down a script, creating call sheets, and maintaining quiet on the set, among other production management duties.
Automatic Gain Control – A control circuit found on many audio recording devices that attempts to boost quiet audio and limit hot audio.

B

Backlight – Lighting positioned directly behind and above the subject, aimed at the subject's back. It's best to avoid having the backlight shine into the lens of the camera, since this might create flares.
Barn Doors – Hinged metal flaps on the front of a light used to control the spread of light.
Best Boy – The gaffer's assistant.
Bins – A term dating back to the time when film and tape were stored in physical bins. In most NLEs, the folders used to organize clips are called bins.
Black Wrap – Black anodized aluminum foil used to control spill light or shape a light beam.
Blocking – A plan for the action on the set. Critical points in the action are marked.
Blondie – A portable 2,000-watt lamp.
Blue Screen – A pure blue, evenly lit background for a process in which the background is rendered transparent so that a new background can be placed behind the subject.
Boom – A travelling arm for suspending a microphone above the actors and outside the frame.
B-Roll – Secondary clips meant to establish a shot, cut to for a transition, or provide a backdrop for narration.
Brute – A 225-amp carbon arc lamp with a 24-inch fresnel lens.

C

C-47 – A clamp used to attach gels or diffusion material to the barn doors of a light.
CCD (Charge-Coupled Device) – An electronic component behind the lens that records color and light information.
Chicken Coop – A light box, usually with six silvered globes, that bounces a soft, shadowless light directly from above.
Chimera – A box with fabric panels that is placed over a light (usually an open-faced light) to soften it. It is usually placed on the key light.
China Lantern – A collapsible wire-and-paper ball with a light source suspended in the middle that provides a soft light.
Chroma Key – A technique used to make a color, usually blue or green, transparent in an image.
Cinematographer – Also known as the Director of Photography. This person is responsible for the camera and lighting, and thus the quality of the image.
Cinematography – Motion picture photography.
Close-up (CU) – A reference for shot size. When applied to a face, the top of the frame rests just above the head and the bottom cuts at the base of the neck.
Closed Framing – When the subject is contained within the frame.
Codec – A digital compression and decompression procedure for encoding audio or video.
Color Sampling – The rate at which discrete color channels (RGB or YCbCr) are sampled. For images in YCbCr color space, it is noted in the form x:x:x, where the first number is the number of luminance samples and the last two are the numbers of Cb and Cr samples.
Component – The individual signals from the red, blue, and green channels, as well as the luminance signal.
Composite – A low-quality output signal where color is combined with the luminance signal.
Compositing – The process of overlaying one image over another. The foreground image has semi- to fully transparent areas that allow the subject to appear as part of the background image.
Conforming – The process of converting to a new frame rate. PAL footage has a frame rate of 25 fps and is often conformed to 24 fps by slowing it down.
Continuity – The consistency of reality across several sequential shots. The continuity director maintains a notebook to keep track of details in the shots.
Coverage – The number of shots planned for a project. Good coverage means you have a wide variety of framings and vantage points from which to choose.
Craft Services – The services, primarily catering, provided to the crew and talent during production.
Crane – A vehicle used for shooting aerial shots. It's a large mechanical arm that holds a camera, a cinematographer, and an assistant or two. It is raised to create dramatic shots where the camera moves in from above or far away for a closeup, or moves away from the subject.
C-Stand – A lighting stand with staggered L-shaped folding legs. A c-stand's base can be stabilized by placing sandbags over the legs.
Cucaloris – A cut-out pattern used to break up light into a dappled pattern.
Cue Sheet – A document listing music used in a production in order to acquire performance rights.
Cutaway – A shot inserted into a scene to show action. This technique is often used in documentaries, where interviews are broken up with cutaways to show action and keep viewers interested.

D

Depth of Field – The range of distances from the lens in which objects are in focus.
Diffuse Light – Soft light created by placing a diffusion material in front of a light.
Dimmer – A device for varying the voltage supplied to an instrument in order to change its intensity.
Director of Photography – See Cinematographer.
Dissolve – The superimposition of a fade out over a fade in. Sometimes called a lap dissolve.
Distribution – The dissemination of media. Also, the stage between production and exhibition.
Dolly – A wheeled platform for moving a camera during a shot. The dolly can be rolled over free ground or along rails for smoother motion. A dolly produces three types of movement: dollying, tracking, and arcing. Dollying refers to direct movement towards or away from the subject. Tracking refers to movement to the left or right of the subject. Arcing pivots the camera around the subject.
Dolly Grip – The operator of the camera dolly.
Double (scrim) – A net that reduces the light by a full stop.
Dropframe – A way of counting timecode so that frame numbers stay in sync with real time. No actual frames are dropped in this process.
DVCAM – Sony's professional digital video standard. It is a variant of MiniDV.
DVC-Pro – Panasonic's line of professional video standards. It scales from DVCPro 25 (MiniDV) to DVCPro 50 (4:2:2 SD) and DVCPro 100 (4:2:2 720p HD and 1080i HD).

E

Editor – The person who determines the narrative structure of a film.
EDL – Edit Decision List. A text-based format for describing a sequence of edits performed on one or more clips.
Egg Crate – A deep grid that shapes soft light into a directional beam rather than letting it spread out. The deeper the crate, the more control it provides.
Electric Image Stabilization – An electronic mechanism that stabilizes motion within the frame by tracking motion, removing it, and cropping to the stabilized result. It is not as good as true optical stabilization, which uses a physical gyroscope.
Encoding – The process of converting footage from one codec to another.
Extreme Close-up – More magnified than a closeup. Examples are a shot of a hand, eye, mouth, or subject of similar detail.

F

Fill Light – A light used opposite the key to provide a lower level of illumination in shadowed areas.
Film Transfer – The option to transfer the movie from a 16:9 aspect ratio to the DVD's 4:3.
Filter – A piece of optics applied to a lens to cut light or impose diffusion or a gradient over the final image.
Final Cut – The film in final form as it will be released.
Finger – Small rectangular net or flag used to control hot spots or cast shadows.
Finishing – The process of applying color correction, upconverting, and mastering a project for broadcast or film.
FireWire – A standard for transmission of digital data between external peripherals, including consumer audio and video devices. The official name is IEEE 1394, based on the original design by Apple Computer.
Fluid Head – A tripod head that enables smooth camera tracking.
Focus – The sharpness of the image.
Frame Rate – The frequency of discrete images, measured in frames per second (fps). Film has a rate of 24 fps, but must be adjusted to match the display rate of a video system.
Frequency Response – An audio system’s ability to equally respond to a range of frequencies within a specified tolerance.
Fresnel – Typically, a spotlight. Fresnels employ a fresnel lens, which has a concentric pattern of stepped and angular grooves. F-Stop – A fraction of the focal length of a lens over its aperture. It expresses an approximation of the amount of light that is allowed through the lens. Smaller numbers equate to more light. For example, a lens with a focal length of 50mm and an aperture of 25mm is F2 since 50 divided by 25 is 2.
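The F-Stop entry above is plain division, and it also implies how light changes between stops: the light admitted goes as the square of the aperture, so each full stop roughly halves it. A small illustration of both calculations:

```python
# The f-stop arithmetic from the glossary entry above: f-number equals
# focal length divided by aperture diameter, and admitted light varies
# with the inverse square of the f-number.

def f_stop(focal_length_mm, aperture_diameter_mm):
    """f-number = focal length / aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

def relative_light(n1, n2):
    """How much more light f-number n1 admits than n2 (area goes as 1/N^2)."""
    return (n2 / n1) ** 2

print(f_stop(50, 25))          # 2.0 -- the entry's 50mm / 25mm example
print(relative_light(2, 2.8))  # about 2x: one full stop more light
```

This matches the entry's note that smaller f-numbers equate to more light.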
G
H Hair Light – A backlight positioned above and just slightly behind a subject to create highlights on the hair. Handheld – This refers to a shooting style where the cinematographer is holding the camera with little or no support. It is more common now that there are many light weight digital video cameras.
Gaffer – A lighting designer or head lighting technician.
Head Room – The difference between nominal and 0 dBFS, or full scale less a safety margin.
Gaffer Tape – A two-inch wide fabric tape with a very strong adhesive.
HMI – (HMI) Hydrargyrum medium-arc length iodide. HMI’s are lighting instruments with the same color temperature as daylight.
Gain – The amount of amplification applied to boost audio or video levels.
I
Gamma – A measure of the midrange contrast of a picture.
IEEE-1394 – Also known as FireWire, this is a standard for transferring data between audio and video devices and computer equipment.
Gel – A transparent, colored plastic sheet (usually polyethylene) used to change the color of light. In early theater the sheets were made of colored gelatin, hence the name.
Interlace – A video scanning system in which alternating lines are transmitted, so that half a picture is displayed each time the scnning beam moves down the screen.
Globe – The shape of the icon on which is superimposed the DVD regional code. More than one may be listed. Iris – The adjustable opening behind the lens that controls the amount of light admitted through the camera. Gobo – A large flag, cutter, or full-sized flat used to cast a shadow on part of the set. The origin of this term comes from early directors shouting, “Go black out.” Greenscreen – A pure green, evenly lit background. It is made transparent in a compositing application so that a different background can be placed behind foreground elements in front of the greenscreen.
J Jib – A long pivoting arm usually mounted on a tripod for raising a camera up or down to produce a crane shot. Junior – A 2,000 (2k) watt fresnel spotlight. It is sometimes mistaken as a 1k watt fresnel spotlight.
Grip – A person who handles set rigging, camera support, sometimes even flags, nets, and silks.
K
Grip Truck – A truck or van equipped with sand bags, lighting stands, crates, and other elements that support film production.
Key Grip – The crew member in charge of all the grips on the set. A grip is an assistant to the cinematographer and helps operate
Glossary: Producing 24p Video
229
camera support equipment and may help the gaffer, or lighting director, position lighting equipment and accessories.
Keying – The process of removing a color background from a foreground shot for compositing over a background shot.
Kicker – A light positioned behind the subject and off to the side, usually opposite the key light source.
Knee – The top portion of a camera's response curve, which pertains to the highlights. By pressing down the knee, overexposure can be prevented.
L
Latitude – The camera's ability to handle contrast, or the range between overexposure and underexposure.
Lavalier – A tiny mic, either wired or wireless, that can be mounted near a subject's mouth.
Letterbox – The visual effect of applying black horizontal bars to the top and bottom of the display area in order to frame video that has a different aspect ratio. This process preserves the entire video picture.
Long Shot – A shot that includes at least the full figures of the subjects, usually more.
M
Martini Shot – The last shot of the day.
Matte Box – A box placed over the lens to block glare. Matte boxes may also include stages for adding neutral density or gradient filters in front of the lens.
Medium Shot – A shot between a closeup and a full shot.
Mise-en-scene – The design of an entire shot in time as well as space.
Mixer – An audio device used for mixing discrete sources of sound and for maintaining proper audio levels.
N
NTSC (National Television Standards Committee) – The television signal standard set by that committee and used in the United States, Canada, and other countries in the Western Hemisphere, as well as Japan.
Neutral Density (ND) – A gel or lens filter that reduces light transmission without coloring the light.
Nose Room – When framing a subject, the space between the subject's nose and the edge of the frame the subject is facing; also referred to as "looking room." More nose room is generally better for two reasons: too little makes the subject look trapped (which can be used to advantage when needed), and more nose room better prepares the audience's eye for the next shot.
O
Omnidirectional – A microphone with a pick-up pattern that records sound with the same sensitivity in all directions.
P
PAL (Phase Alternating Line) – The television standard used in most of Europe and in countries formerly under European control.
Pan – A horizontal move of the camera, usually done by rotating the camera with a pan and tilt head.
PAR (Pixel Aspect Ratio) – The ratio of width to height for an individual pixel in a frame of video. Because some video formats, such as NTSC standard and NTSC widescreen, have rectangular pixels, the pixel aspect ratio is needed along with the frame dimensions to describe the frame's overall aspect ratio.
Pedestal – Video black level set at 7.5 IRE, used in most NTSC countries. Also known as Setup.
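The letterbox and pixel-aspect-ratio entries in this glossary both come down to simple arithmetic. A minimal sketch, assuming illustrative function names and a commonly quoted 0.9 PAR approximation for NTSC DV (neither comes from the book):

```python
# Sketch: relating pixel aspect ratio (PAR) to display aspect ratio (DAR),
# and computing letterbox bar heights. The 0.9 PAR value is a common
# approximation for NTSC DV, used here only for illustration.

def display_aspect(width, height, par):
    """Display aspect ratio = (pixel width * PAR) / pixel height."""
    return (width * par) / height

def letterbox_bars(disp_w, disp_h, source_aspect):
    """Height in pixels of each black bar when fitting a wider source
    into a narrower display, scaling the source to the full width."""
    scaled_h = disp_w / source_aspect       # source height after scaling
    return int((disp_h - scaled_h) // 2)    # one bar on top, one on bottom

# NTSC DV 720x480 with ~0.9 PAR yields a picture close to 4:3:
print(round(display_aspect(720, 480, 0.9), 2))   # 1.35
# Fitting 16:9 material into a square-pixel 4:3 frame (720x540):
print(letterbox_bars(720, 540, 16 / 9))          # 67
```

The same arithmetic explains why a 16:9 program letterboxed into a 4:3 frame loses roughly a quarter of the vertical resolution to the bars.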
Glossary: Producing 24p Video
Polar Pattern – A shape representing the pick-up pattern for a microphone.
Postproduction – After principal photography is nearly complete, postproduction begins in earnest; sometimes it begins while production is still underway. Editing, visual effects, and mastering are done in postproduction.
Pre-production – The planning stage that takes place before video and film production.
Production Designer – See art director.
Profile – In MPEG-2, profiles specify syntax and processes, for example, picture types, scalability, and extensions.
Pull Focus – To refocus during a take or change the focus plane.
Q
QuickTime – A cross-platform digital video architecture developed by Apple Computer.
R
Rack Focus – A technique that uses shallow focus (shallow depth of field) to direct the viewer's attention from one subject to another. Focus is pulled, or changed, to shift the focus plane, often rapidly, and sometimes several times within the shot.
Reaction Shot – A shot that cuts away from the main scene in order to show a character's reaction to it.
Rear Pick-up – The amount of sound picked up at the rear of a microphone. Directional microphones should have very little to no rear pick-up.
Reverse Pulldown – The process of removing pulldown from video originating on film. This converts interlaced video back to progressive video with a frame rate of 23.976 fps.
RGB – A color space where video is described in the form of red, green, and blue values. The combinations of these three values represent the entire range of visible light.
Rim Light – A spotlight pointed at the back of a subject from the side to create a highlight along the edge of the subject. A rim light creates additional separation between the subject and the background.
Room Tone – The particular quality of sound in a location without the subject's voice. A minute of room tone should be recorded at every location in case additional ADR work or sound effects need to be added.
Rotoscoping – A special effects technique that involves creating mattes for a subject frame by frame. Today, rotoscoping is accomplished digitally by using spline-based curves and painting mattes in a compositing application.
Rough Cut – The first assembly of scenes in the editing process.
Rule of Thirds – A guideline that assists in framing a shot. It begins by dividing the shot's composition into nine equal parts using two equally spaced horizontal guides and two equally spaced vertical guides. When framing a shot, subjects are placed along these guides, ideally at their intersections. This style of framing is considered more dynamic and interesting than simply centering the subject within the frame.
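The reverse-pulldown process defined in this section can be illustrated with a toy model: apply the 2:3 field cadence to four film frames, then recover the originals by discarding the duplicated fields. This is a deliberately simplified sketch, not any vendor's actual algorithm; real reverse pulldown must also re-pair interlaced fields that straddle two film frames.

```python
# Toy model of 2:3 pulldown and its reversal. Frames are just labels here;
# real pulldown operates on interlaced fields in a video stream.

def add_pulldown(frames):
    """Spread 4 film frames across 10 video fields using a 2,3,2,3 cadence."""
    cadence = [2, 3, 2, 3]
    fields = []
    for frame, count in zip(frames, cadence):
        fields.extend([frame] * count)
    return fields

def reverse_pulldown(fields):
    """Recover the original frames by collapsing runs of duplicate fields."""
    frames = []
    for f in fields:
        if not frames or frames[-1] != f:
            frames.append(f)
    return frames

fields = add_pulldown(["A", "B", "C", "D"])
print(fields)                    # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(reverse_pulldown(fields))  # ['A', 'B', 'C', 'D']
```

The 4-frame/10-field ratio is why 24 film frames fill exactly 30 interlaced NTSC frames (60 fields) each second.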
S
Scene – A complete unit of film narration. A series of shots (or a single shot) that takes place in a single location and deals with a single action.
Scrim – An opaque plate placed in front of a light in order to cast a particular shadow, usually to simulate natural lighting.
Script Breakdown – A preproduction activity in which the assistant director analyzes the script to develop a shooting schedule, prop lists, call sheets, and reports used by various crew members.
Second Sound – A second system used to record sound. In film production, sound is recorded separately from the film. Second sound can also be used in video production to serve as a backup or to pick up additional discrete tracks of sound.
Selects – Clips in an NLE project that are marked as being good for a particular theme. For example, a food documentary might have a selects bin for clips showing organic farms.
Setup – A camera and lighting position.
Shooting Ratio – The ratio between the amount of footage shot and the amount used in the final edit.
Shooting Script – A script that has each scene numbered, camera angles noted, and directorial notes made. The shooting script is used heavily in production by the script supervisor and continuity director.
Shot List – A schedule listing the shots needed to complete a film. The shot list is usually organized by location rather than by time for the sake of efficiency.
Shotgun Microphone – A directional microphone often used to record dialog for narrative and documentary film. It is usually best mounted on a boom pole and placed as close to the subject as possible without entering the frame.
Shutter – The device that opens and closes an aperture on a camera or projector.
Single (scrim) – A net that reduces the light by a half stop.
Slate – The equipment used to mark the start of a take, enabling the synchronization of video and sound.
Soft Box – A cloth and wire umbrella-like contraption that holds diffusion material in order to soften the light.
Sound Blanket – A heavy, padded cloth, usually about six feet square, that is hung around a shooting location to absorb unwanted sound.
Sound Check – Testing the equipment before the session.
Sound Recordist – The crew member responsible for recording audio on a set.
Stereo Microphone – A microphone used for recording spatial sounds such as backgrounds, large sound effects, and music. It is rarely used for dialog.
T
Telecine – The process (and the equipment) used to transfer film to video. The telecine machine applies 2:3 pulldown by projecting film frames in the proper sequence for video capture.
Tilt – To rotate the camera vertically using a pan and tilt head on a tripod.
Timecode – A reference used for both video and sound in order to stay synchronized.
Title – The largest unit of a DVD-Video disc, which could be a movie or TV program.
Track – Any one of possibly several separate parallel recording channels on tape that can be played together or separately and later mixed or modified.
Transcoding – See encoding.
Tripod – A three-legged stand for a camera.
U
Up Convert – The process of enlarging video to a higher resolution such as HD or film.
Up Res – See up convert.
V
Vectorscope – An instrument for measuring the chroma and hue of the video signal.
W
White Balance – A procedure for adjusting the camera to recognize a specific color temperature as white.
Wide Shot – A shot in which the subject (or the relative size of a person if the shot has no one in it) is very small. There are a number of wide shots that frame the subject at different proximities, but in all cases the subject fits entirely in the frame with some room on top and bottom.
Windows Dub – A copy of a source tape that has burned-in timecode. The copy is used for transcribing the footage so that the editor can produce a paper edit. It is most useful in documentary productions with high shooting ratios.
X Y Z
Zebra Pattern – An overlay of diagonal hash marks shown in the camera viewfinder or LCD panel that flags overexposed areas.
Zeppelin – A tubular windshield that encloses a microphone, used to control wind noise.
Zoom – A shot using a lens whose focal length is adjusted during the shot.
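The timecode defined in this glossary is ultimately just a frame count rendered as hours:minutes:seconds:frames. A minimal non-drop-frame converter for 24p material follows; the function names are illustrative, and drop-frame 29.97 timecode (which skips certain numbers) is deliberately not handled:

```python
# Sketch: non-drop-frame timecode at an integer frame rate such as 24 fps.
# Drop-frame 29.97 NTSC timecode requires extra bookkeeping and is omitted.

def frames_to_timecode(frame, fps=24):
    """Render an absolute frame count as HH:MM:SS:FF."""
    ff = frame % fps
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def timecode_to_frames(tc, fps=24):
    """Parse HH:MM:SS:FF back into an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff

print(frames_to_timecode(86400))          # 01:00:00:00  (24 * 60 * 60 frames)
print(timecode_to_frames("00:01:00:12"))  # 1452
```

Because 24 divides evenly into a second, 24p timecode needs no drop-frame correction; this is one of the small conveniences of working at a true integer frame rate.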
Index 1.33:1 aspect ratio, 225 1.78:1 aspect ratio, 225 2:3 pulldown, 225 2:3:3:2 pulldown, 225 4:1:1 color sampling ratio, 225 4:2:0 color sampling ratio, 225 4:2:2 color sampling ratio, 225 4:2:4 color sampling ratio, 225 16:9 display format, 225 24 fps format, 225 24p video. See also video advanced mode, 31, 32–33, 172 compressing for DVD, 200–207 history of, 2–3, 28–29 output of. See distribution overview, 28–33, 225 standard mode, 31–33, 172 working with, 171–178 25 fps frame rate, 225 29.97 drop frame timecode, 225 29.97 timecode, 225 30-degree rule, 84 32 kHz audio rate, 225 44.1 kHz frequency, 225 48 kHz frequency, 225 180-degree rule, 84, 225 720p video, 225 1080p video, 225
A acoustics, 70 action safe zones, 118–119 actors. See also cast members auditioning, 67–68 camera position of, 105 costumes, 60 finding, 66–67 makeup, 61 Adobe Media Encoder, 217–218 ADR (Automatic Dialog Recording), 160, 225 After Effects, 187–192
Almodovar, Pedro, 75 analog to digital process, 21–22 analog video, 20–21 anamorphic lens, 77, 226 aperture, 90–91, 226 apple boxes, 225 Apple Pro Video applications, 203–206 applications. See software archiving work, 169 arcs, camera, 97 art director, 226 artifacts, 28, 36, 113–114, 186 aspect ratios, 16–17, 25, 26, 125, 225, 226 assistant director, 226 audience, 3 audio, 141–160. See also sound distortion, 154, 156, 158 fixing, 142 frequency, 144–145 importance of, 142–144 location acoustics, 70 microphones. See microphones mixing, 155–157 monitoring, 154–158 noise, 70, 146–150, 152, 153 recording, 17, 24, 154–160 audio bit rates, 207 audio equipment, 143–144 audio levels, 154 audio rates, 225 auditions, 67–68 author, 60 Automatic Duck Pro, 191 automatic gain control, 226
B B-roll, 226 backdrops, 115 backlight, 226 backups, 168 barn doors, 226 batteries, 112 235
best boy, 226 Betacam SP, 34 bins, 166–167, 226 bit budgeting, 202–203 bitrates, 201, 202–203 black wrap, 226 blocking, 226 blocking shots, 72 blondie lamp, 226 blue screen, 114–117, 226 Bonjour, Jean-Paul, 43, 65, 71, 79 boom, 226 boom pole, 63–64, 78, 143, 152–153 boom pole operators, 63–64, 142, 148, 152–153 brightness, 137 brute lamp, 226 budgeting, 45–47, 52. See also financial issues
C C-47 clamp, 226 c-stand, 78, 227 cables, 110–111, 144, 149–150, 155 camera accessories, 77–78 camera angles, 84, 104–109 camera blocking, 72 camera craft, 109–119 camera settings, 184–185 camera shots. See also shooting video blocking shots, 72 close-up shots, 102–104, 227 crane shots, 77, 98, 227 cutaway shots, 227 detail in, 113, 117, 134–135 handheld shots, 98–99, 229 long shots, 101, 230 martini shot, 230 medium shots, 102, 230 purpose of, 82–85 reaction shot, 231 sharpness of, 86, 113 sizes, 101–109 test shots, 76 tips for, 82–85 types of, 85 camera stabilizers, 99–101 236
Index: Producing 24p Video
cameras. See also video cameras care of, 112 digital, 6, 71, 74 film, 8–17 production tips, 112 recommendations, 5 supporting HDV, 37–38 Canon XL2 camera, 5, 41 capture process 24p considerations, 171 Final Cut Pro, 172–177 overview, 165–166 Premiere Pro, 177–178 car mounts, 100–102 cast members. See also actors; crew auditioning, 67–68 compensation, 45 finding, 66–67 insurance for, 54 rehearsals, 75–76 releases for, 55 casting, 66–71 described, 66 for documentaries, 68–70 following up on, 67–68 recording sessions, 67 catering services, 65, 227 CCD (charged coupled device), 19–20, 117, 226 CD-R media, 203 chimera box, 226 chroma key, 227 chroma level, 135 chroma phase, 135–136 cinematographer, 82, 227 cinematography, 81–140 24p and, 82 camera craft, 109–119 camera movement, 92–101 custom presets, 133–140 described, 82, 227 documentary-style, 119–121 focal length, 87–92 focus, 86–87, 228 interviews, 121 scene files, 133–140
shots. See camera shots video engineering, 121–131 zooming. See zooming in/out clapper, 143, 158–159 clipping, 157 clips, 167–168 close-up shots, 102–104, 227 CMOS image sensors, 20 CMYK color, 128 codecs, 113–114, 200, 208–209, 227 collaboration, 73, 169 color CMYK, 128 resolution, 128–129 RGB, 21, 128, 129, 231 temperature, 129–131, 136 color bars, 124–125 color models, 126–129 color resolution, 128–129 color sampling, 21–22, 225, 227 color spaces, 21 color temperature, 129–131, 136 color wheel, 126–127 component, 227 composite, 227 compositing, 115–116, 227 composition, 83 compression, 200–207 24p for DVD, 200–203 blue/green screens, 115 interframe, 23, 201 Internet video, 207–219 intraframe, 23, 201 MPEG, 200–203 overview, 22–23 production and, 23 video for Internet, 207–208 compression ratios, 23 comps, 189–191 computer system, 5–6 conforming, 227 continuity, 74, 85, 114, 121, 227 continuity supervisor, 62, 74, 114 converting video film to video, 27–28 Index: Producing 24p Video
Final Cut timeline as XML, 191 HDV to 24p, 180 interlaced to 24p, 179–197 copyright protection, 53, 54–56 cost considerations. See financial issues costume designer, 60–61 coverage, 227 Cowles, Dan, 219 craft services, 65, 227 crane shots, 77, 98, 227 crew. See also cast; specific job titles compensation, 45 finding, 57 insurance for, 54 job descriptions, 57–65 postproduction, 163–164 preproduction, 57–66 rehearsals, 75–76 releases for, 55 reporting structure, 66 cropping, 106, 211 Cucaloris, 227 cue points, 211 cue sheet, 227 curves, 193–195 cutaway shots, 227
D dailies, 170 DaSilva, Alex, 70 DAT recorders, 17, 159–160 data rate, 22 depth of field, 13, 88–92, 117, 227 Detail Coring, 135 detail level, 113, 117, 134–135 DI (digital intermediate), 9, 30, 223 diffuse light, 227 Digital Betacam, 34 digital cameras, 6, 71, 74. See also cameras digital files, 222–223 digital imaging technician, 122 digital intermediate (DI), 9, 30, 223 Digital Signal Processor (DSP), 20–23 digital video, 20–22, 33–38. See also video digital video format. See DV 237
digital video recorders (DVRs), 39, 40–41 dimmer, 227 direct-to-disk recording, 39 director, 57–58, 73 director of photography (DP), 59, 227 disc size, 203 disk space, 208 dissolve, 227 distortion audio, 154, 156, 158 video, 12, 88, 102 distribution, 199–224 compressing 24p for DVD, 200–207 compressing for Internet video, 207–219 delivery/prep options, 221–223 described, 44, 228 tape to film facilities, 223 transferring video to film, 220–223 documentaries casting considerations, 68–70 cinematography, 119–121 editorial strategies, 167, 170–171 equipment for, 76 interviewing styles, 120–121 stories, 69–70 documentary subjects, 55, 68–70 dollies, 77, 94–97, 228 Dreamweaver, 213–214 dropframe, 228 DSP (Digital Signal Processor), 20–23 DV codec, 113–114 DV (digital video) format, 20–22, 33–38 DV Rack, 40–41 DV tape. See also tape archiving, 168 blanking, 110 labeling, 110, 164–165 master tapes, 221–222 naming schemes, 110 DVC-Pro standards, 34, 36, 228 DVCAM standard, 34, 228 DVD-5 media, 203 DVD-9 media, 203 DVDs burning, 6 238
companion to book, 5 compression, 200–207 vs. film, 220 DVRs (digital video recorders), 39, 40–41 dynamic range, 147
E Eames, Charles and Ray, 96 EBR (electron beam recorder), 221 editing process. See also postproduction archiving work, 168 collaboration, 169 documentaries, 170–171 labeling tapes, 110, 164–165 learning about, 164 logging styles, 165–166 naming clips, 167–168 strategies for, 164–171 working with 24p material, 171–178 working with bins, 166–167 editors, 64–65, 163, 228 EDL (Edit Decision List), 222, 228 effects supervisor, 65 egg crate, 228 electric image stabilization, 228 electromagnetic noise, 156 encoding, 228. See also codecs equipment acquiring, 76–79 camera accessories, 77–78 organizing, 110–111 packing, 111 quality of, 117–118 recommended, 5 equipment checklist, 77–78 Escobar, Eric, 220, 224 expendables, 78 exposure, 8, 113–114, 118, 123–125, 132 eye line, 107–108
F F-stop, 229 field monitor, 78, 153 file formats, 169. See also specific formats files
delivery of, 222–223 naming, 169 scene, 133–140 sharing, 169 fill light, 228 film. See also tape converting to video, 27–28 development of, 9, 15 exposing, 8 frame rates, 10 overview, 8–17 sizes, 14–15 transferring video to, 220–223 vs. video, 122 film cameras, 8–17 film finishing technician, 163 film recorder, 221 film sound recording, 17 film transfer, 228 filters, 11, 77, 78, 183–185, 228 final cut, 228 Final Cut Pro Nattress Film Effects, 192–197 setting markers with, 203–204 working with, 172–177 financial issues budgeting, 45–47, 52 cost-saving tips, 47 location cost, 70 transferring video to film, 220 finger, 228 finishing, 228 FireWire standard, 228 Flash video (FLV), 208–214 fluid head, 92, 228 focal length, 87–92 focus, 86–87, 123, 131–132, 228 focus ring, 13 follow focus, 13, 77, 86 food services, 65, 227 formats, 169. See also specific formats frame rates, 10, 140, 207, 228 frame size, 207 framing, 106, 108–109, 125, 227 frequencies, 144–145, 225 Index: Producing 24p Video
frequency response, 144–145, 228 fresnel spotlight, 229, 233 fundraising, 46–47 FX artist, 163
G G Film plug-in, 192–193 G Nicer, 196–197 gaffer tape, 229 gaffers, 61, 229 gain, 156, 229 gamma, 137–138, 229 gamut, 129 gear. See equipment gel, 229 globe, 229 Gobo, 229 GOPs (groups of pictures), 36, 201–202 Gorilla Film Production, 51–52 grants, 46–47 green screen, 114–117, 229 grip truck, 78, 229 grips, 61–62, 229
H H.264 codec, 200, 214–216 hair light, 229 handheld microphone, 148 handheld shots, 98–99, 229 hard disk recorders, 159–160 hardware, 5–6 hardware DVRs, 40 HD (High Definition) format, 28–30, 34–36 HDCAM format, 35 HDV (High Definition Video) format, 36–38, 180 head room, 106, 229 headphones, 143 high definition formats. See HD; HDV HMI (Hydrargyrum), 229
I I/O (input/output), 24 icons, 4 IEEE-1394 standard, 229
impedance, 147 insurance, 53–54 interframe compression, 23, 201 interlaced video applying film-look to, 186 converting to 24p, 179–197 described, 225 interlacing, 26–28, 200, 229 Internet video, 119, 207–219 interviews, 120–121 intraframe compression, 23, 201 IRE (Institute of Radio Engineers), 122–123 iris, 132, 229
J jib, 77, 229 judder, 93 junior, 229
K key grip, 229–230 keying, 230 kicker, 230 kinescope, 221 knee, 138, 230
liability insurance, 54 light box, 226 lighting blue/green screens, 115 color temperatures, 129–130 tips for, 70, 118 lighting accessories, 78 lighting kits, 78 limiters, 157 line level, 154 location releases, 55–56, 71 locations acoustics, 70 available light, 70 cost considerations, 70 permits for, 56 scouting for, 70–71 security considerations, 56 taking photographs of, 71 logging, 165–166, 171. See also capture process long shots, 101, 230 lossless codecs, 23 lossy codecs, 23 Lucero, Anthony, 198
M L labeling items, 110, 111, 164–165 laser recorder, 221 latitude, 230 lavelier microphone, 149–150, 230 lead room, 107 lenses anamorphic, 77, 226 film cameras, 11–13 focal length, 87–92 normal, 88 prime, 12, 13, 19 settings, 183–184 telephoto, 88, 89, 90 video cameras, 18–19 wide-angle, 88, 90 zoom, 12–13, 18–19, 89 letterbox effect, 16, 230
Macintosh systems, 5–6 Magic Bullet Editors, 181–191 magnetic tape, 38–42. See also tape makeup artist, 61 markers Final Cut Pro, 202, 203–204 Premiere Pro, 206–207 QuickTime files, 216 martini shot, 230 Master Pedestal, 136–137 master tapes, 221–222 Matrix settings, 138–139 matte box, 10, 11, 77, 78, 230 medium shots, 102, 230 memory, 6 memory recording, 39 microphone level, 154 microphones characteristics of, 144–148
considerations, 5, 78 dedicated vs. on-camera, 142 handheld, 148 lavelier, 149–150, 230 noise, 146–150, 152, 153 recommendations, 143 shotgun, 150–153, 232 stereo, 232 stick, 150–152 types of, 148–153 Mise-en-scene, 230 Misfire plug-in, 186 mixers, 5, 78, 143, 156–157, 230 mixing sound, 155–157 monitors, 5 MPEG-4, 214, 215, 220 MPEG standards, 36, 200–203, 215 music, 53, 54–55
N narrative films, 76, 119, 167, 169–170 Nattress Film Effects, 192–197 networking, 57 neutral density (ND), 114, 183, 230 noise audio, 70, 146–150, 152, 153 electromagnetic, 156 location, 65, 70 microphone, 146–150, 152, 153 pop, 147–148 set, 158 sound blanket for, 143, 152, 232 wind, 147, 149, 150 nose room, 106, 230 NTSC (National Television Standards Committee), 24–28, 172, 202, 230
O OIS (Optical Image Stabilization), 93 omnidirectional, 230 organizational chart, 66
P P2 format, 38
PAL (phase-alternating line), 24–28, 202, 230 Panasonic DVX100 models, 5, 41–42 panning, 92–93, 109, 230 PAR (pixel aspect ratio), 230 pedestal, 96, 136–137, 230 permits, 56 pixels, 25–26 pointing room, 107 polar patterns, 145–147 pop noise, 147–148 post settings, 185 postproduction, 161–198. See also editing converting interlaced video to 24p, 179–197 crew for, 163–164 described, 44, 231 high-level overview of, 162–163 working with 24p material, 171–178 postroll, 109 power, 71 pre-interviews, 68, 70 Premiere Pro, 206–207 preproduction, 43–79 acquiring equipment, 76–79 activities, 44–56 budgeting, 45–47, 52 casting, 66–71 copyright protection, 53, 54–56 crew for, 57–66 described, 44, 231 location scouting, 70–71 previsualization, 71–74 production design, 74–75 production insurance, 53–54 rehearsals, 75–76 release forms, 55–56, 71 scheduling, 48–53 script breakdown, 48–50, 231 security considerations, 56 setups, 49, 112–113 storyboarding, 71–74 preroll, 109 presets custom, 133–140 Final Cut Pro, 173–175 Premiere Pro, 177–178 241
previsualization, 71–74 prime lens, 12, 13, 19 producers, 58 production. See also postproduction; preproduction compression and, 23 described, 44 editorial strategies, 169–170 production assistants, 62–63 production design, 74–75 production designers, 59–60, 74 production monitor, 123–125 production planning software, 51–53 profiles, 231 progressive video, 140, 200, 225 project presets. See presets proofing exposure, 123–125 proofing focus, 123 proposals, 46–47 props, 50, 74–75 pull focus, 231
Q quantization, 22 QuickTime, 214–216, 231 QuickTime reference movie, 188–191, 222
R rack focus, 86, 231 reaction shot, 231 rear pick-up, 231 recorders, 159–160 recording. See also capture process alternatives, 39–42 audio, 17, 24, 154–160 casting sessions, 67 direct-to-disk, 39 memory, 39 multi-track, 157 room tone, 160, 231 sound effects, 160 transferring video to film, 220–221 voice, 156–157 reels, 222 reference points, 107–108 refresh rate, 10 242
rehearsals, 75–76 release forms, 55–56, 71 resolution, 25 resources, 4 reverse pulldown, 231 RGB color, 21, 128, 129, 231 rim light, 231 room tone, 160, 231 rotoscoping, 231 rough cut, 231 rule of thirds, 108, 231
S SAG (Screen Actors Guild), 55 scene breakdown pages, 48–50, 231 scene files, 133–140 scenes analysis of, 75–76 described, 231 Mise-en-scene, 230 segmenting, 48–49 scheduling, 48–53 scopes, 125–126 scouting props, 74–75 scrim, 231 script breakdown, 48–50, 231 script supervisor, 62 scripts analysis of, 48–50, 75–76, 231 segmenting, 48–49 shooting, 47, 49 SD (Standard Definition) video, 33–34 SD video cameras, 39–42 security considerations, 56 set breakdown, 74–75 set design, 74 setups, 49, 112–113 sharing files, 169 sharpness, 86, 113 shockmount, 143, 144 shooting ratio, 232 shooting script, 47, 49 shooting video. See also camera shots keeping locals happy, 56 obtaining permits, 56
scheduling shoots, 50–51 setups, 49, 112–113 time required for, 49 shot list, 49, 119, 232 shotgun microphone, 150–153, 232 shots. See camera shots shutters, 13, 20, 232 skin tone, 139 slate, 113, 158–159, 232 soft box, 232 software, 5–6, 51–53, 169. See also specific programs software digital video recorders, 40–41 sound. See audio sound accessories, 78 sound blankets, 70, 143, 146, 152, 232 sound checks, 149, 158, 232 sound designers, 65 sound editors, 65, 163 sound effects, 160 sound recorders, 78, 143, 159–160 sound recording team, 142 sound recordist, 63, 142, 232 standard definition video. See SD steadicam, 78, 99 stereo microphone, 232 stick microphone, 150–152 stories, 69–70 storyboard artist, 73 storyboarding, 71–74 streaming media, 115–119
T tape. See also DV tape; film gaffer, 229 labeling, 110, 164–165 magnetic, 38–42 master, 221–222 telecine, 27–28, 232 tilt head, 92–93, 112, 232 tilting, 92–93 timecode, 232 timecode breaks, 166 title safe zones, 118–119 titles, 232 tracking, 94–95 Index: Producing 24p Video
tracks, 232 transcoding, 232 transport mechanism, 14, 23–24 trimming, 211 tripods, 5, 77, 93, 95, 232 trucking, 94–95 trucks, 78
V vectorscope, 126, 233 Vertical Detail Frequency, 139 video, 17–28. See also 24p video analog, 20–21 archiving, 168 converting. See converting video digital, 20–22, 33–38 distortion, 12, 88, 102 high-definition. See HD; HDV interlaced, 26–28, 225 Internet, 119, 207–219 progressive, 140, 200, 225 standard. See SD transferring to film, 220–223 vs. film, 122 video bit rates, 207 video cameras. See also cameras basic controls, 131–133 buying vs. renting, 77 care of, 112 choosing, 76 components, 17–20 movement, 92–101 production tips, 112 recommendations, 5 supporting HDV, 37–38 video clips, 167–168 video deck, 5, 78, 112 video engineering, 121–131 video monitor, 5 video online reporter, 163 voice recording, 156–157, 160 VU meters, 154, 156
W waveform monitors, 78, 126
web sites companion to book, 4 copyright information, 54 forms, 56 Foundation Center, 46 white balance, 130–131, 233 wind noise, 147, 149, 150 wind screen, 143, 147, 150, 152 Windows Media, 216–218 Windows systems, 5–6 windscreen, 143, 152 workers compensation, 54 writers, 58
X XDCAM format, 38–39 XLR adapters, 143 XLR cables, 111, 143, 144, 150 XML export plug-in, 191
Z zebra pattern, 233 zebras, 125 zeppelin, 143, 147, 152, 233 zoom controls, 96–97, 133 zoom lenses, 12–13, 18–19, 89 zoom presets, 133 zooming in/out, 96–97, 233 zooms, quick, 109
What’s on the DVD The DVD includes a DVD-Video portion that is readable in a DVD-Video player and a DVD-ROM portion that contains production templates and sample footage.
Updates Want to receive e-mail updates for Producing 24p Video? Visit our web site www.cmpbooks.com/maillist and select from the desired categories. You’ll automatically be added to our preferred customer list for new product announcements, special offers, and related news. Your e-mail address will not be shared without your permission, so sign up today! If you would like to contribute to the effort by reporting errors or posting your own tips, please contact the author at [email protected].