
The cost-effective HD feature film Part 3 Post-Production


In Parts 1 and 2, we focused on the pre-production and production considerations of putting together a cost-effective HD (high definition) feature film. In this part, we will conclude by discussing HD post-production, including some details on the RED workflow.

Post-production tends to be an overlooked part of the process, because people tend to assume they can “sort it out later”. But knowing the proper HD post-production workflow can be a massive time-, headache- and money-saver.

INTRODUCTION
First off, you need to know that there are several basic stages in HD post-production, or any feature film post-production: Offline Editing, Online Editing, and Colour-grading.

Offline Editing is where the creative editing happens. You work with lower-resolution and/or lower-quality versions of your HD footage, which allows you to cut and render very quickly. Everything is in draft quality, and cannot be used for final delivery.

Because Offline Editing is where the story is put together, I’d recommend using a professional editor whom you trust, if possible. However, I know that many indie filmmakers won’t be able to afford a professional editor at normal rates, so they’ll either ask for a favour, or a special rate, or cut the film themselves – hence the existence of this article. The more you know before you start DIY-ing, the better.

Online Editing is the polar opposite of Offline – it’s almost entirely a technical process. The Online Editor takes the Offline Edit and (sometimes painstakingly) matches all the cuts in order to bring the footage up to full quality and full resolution. Most of the time, this entails actually re-digitising the necessary footage from the original HD tapes.

Colour-grading refers to the act of tweaking the colours for final delivery. There are several reasons for doing this:

•    Aesthetic – Trying to achieve a certain look that the Director and Director of Photography are going for, whether it be cool, saturated, warm, bleach-bypassed, etc. Most people understand this.

•    Continuity – Making sure shots match in terms of look when cutting from one to another within the same scene. And also making sure that there is visual continuity throughout the whole movie.

•    Technical – Making sure the HD footage stays within the specifications for 35mm film transfer, D-Cinema transfer, or broadcast. Because of the various standards, sometimes a film needs to go through several colour-grading passes – one pass for each standard.

•    Corrective – Every once in a while, the Director or Director of Photography is unable to fix a problem on set, and requires colour-grading to fix it. These fixes should be avoided as much as possible. Example – running out of time in the day, shooting a day scene during the evening, then fixing it in colour-grading.

Most editors don’t know colour-grading, and most Colourists (people who do colour-grading) may not be editors. Colour-grading is a very specific skill set that combines both aesthetic and technical aspects, so not everyone can do it. Best leave it to professionals, if in doubt.

For this article, we will be focusing on preparing a project for Offline Editing. We won’t go into Online Editing and Colour-grading, as those are areas that require deeper technical know-how. What we want to do is get you going on your creative journey of actually cutting the film in Offline. You should always seek professional assistance for Online and Colour-grading, if you are unfamiliar with those processes.

Before commencing Offline, always speak to the post house doing your Online, so they can explain any specific steps they require you to take. That’s something that we, as a post house, always encourage. It doesn’t cost you anything, and can save you (and us) a world of pain later on. We will cover most of the considerations in this article, but other post houses may have some of their own.

At times, I will make reference to the Final Cut Pro method of doing things. The reason for this is simple. Most production houses and independent filmmakers in Singapore are using Final Cut Pro. This doesn’t mean that it’s the best tool for editing. I use others, and work on other platforms, as well. It just means it’s the most popular, and more people are likely to connect with the contents of this article.

For the most part, I’ll be speaking generically about how to go about preparing a HD project for Offline. So while the file formats and codecs may vary, many of the ideas can also be applied to Avid, Premiere, etc.

At the end of the article, I will also cover Audio and Graphics considerations.

PREPARING A PROJECT FOR OFFLINE EDITING

DURATIONS
If you’re cutting a HD feature that will eventually go to 35mm film, you have to remember that films are not shown using one gigantic reel. They are cut up into several reels. Then, during projection, the reels that make up the feature film flow seamlessly into each other.

Each reel of 35mm film is, at most, 20 minutes long. So when editing your feature, you have to remember to cut it into 20-minute sequences. This ensures that you do not cut to another reel in the middle of a scene.

Why is this important? Because film is a photochemical medium, and is thus prone to inconsistencies. Whether it’s the batch of film stock, or the number of times the chemicals have been used, or any number of factors in the whole chain of steps – any of them can result in colour shifts from reel to reel.

It would be best if the scene at the end of one reel has no aesthetic relation to the starting scene of the next reel.  E.g. the last scene of the previous reel takes place in an entirely different location from the first scene of the next reel.
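To make the 20-minute-reel rule concrete, here’s a quick sketch (a hypothetical helper, not part of any NLE, with made-up scene durations) of how you might group scenes into reels without ever splitting a scene across a reel boundary:

```python
# Illustrative sketch: splitting a feature's scene list into reels of
# at most 20 minutes, cutting only at scene boundaries.
# Scene durations below are hypothetical example values.

MAX_REEL_MINUTES = 20

def split_into_reels(scene_durations, max_minutes=MAX_REEL_MINUTES):
    """Group scene durations (in minutes) into reels <= max_minutes,
    never splitting a scene across two reels."""
    reels, current, total = [], [], 0.0
    for d in scene_durations:
        if d > max_minutes:
            raise ValueError("a single scene exceeds one reel")
        if total + d > max_minutes:   # scene won't fit: close this reel
            reels.append(current)
            current, total = [], 0.0
        current.append(d)
        total += d
    if current:
        reels.append(current)
    return reels

scenes = [8.5, 6.0, 4.0, 7.5, 9.0, 3.5, 12.0, 5.0]   # a 55.5-minute example
reels = split_into_reels(scenes)
print(len(reels), [round(sum(r), 1) for r in reels])  # 3 [18.5, 20.0, 17.0]
```

The same logic is what you’re doing by hand when you break your timeline into 20-minute sequences: each reel ends early rather than cut into the middle of a scene.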

SD OR HD?
Knowing how to choose and manage your Offline files can save you plenty of time and money later in Online.

The first thing you’ll have to determine is whether your system can work Offline in HD or SD (standard definition). Regardless of which, you’ll be working with compressed files, because uncompressed files are too large for most single hard disks to play back comfortably.

Now, why should anyone want to do Offline work in HD? SD is smaller, and more nimble, so why the hassle?

The same reason why Peter Jackson cut King Kong in HD. To check for problems:

•    Focus – It’s easier to check whether your focus is spot-on when watching the footage in HD. When it’s shrunk 4 to 6 times to SD, it can be very hard to tell, even for pros. There’s no point cutting in a take that’s out of focus, then having to replace it in Online later on. By that time, it’ll be painful for all involved, because it will affect the audio post as well.

•    Continuity – The story goes that there were several continuity errors made in Lord of the Rings – The Fellowship of the Ring. Hobbits wearing the wrong wardrobe, etc. That was because Peter Jackson was editing in SD at the time. He couldn’t spot the mistakes as easily. But during King Kong, he could catch extras in period garb wearing cellphones by accident, because he was working in HD.

So cut in HD, if you can afford to. However, this doesn’t mean you need a professional HD monitor or HDTV while you edit. It’s great if you have them, but then I will begin to question whether you’re really a cost-conscious indie filmmaker. In any case, without an HD monitor, you can simply view the HD footage at 100% in your NLE when needed, in order to spot the above problems.

If you’re using an Apple G5 or a Mac Pro with FCP 5 or above, you can probably work Offline in HD. Or if you’re on a fairly new PC with Premiere CS3 or above, you can probably work in HD as well.

If you’re using an Apple G4 with FCP 4 or below, go with SD; and if you’re using an older PC with Premiere CS2 and below, I’d recommend the same. This is because older systems’ processors may not be able to handle decoding large amounts of files in HD codecs.

PREPARING “PROPER” SD FILES FOR OFFLINE

If you must cut your Offline in SD, this section is very important to note.

HD is usually displayed in 16:9, while SD is generally 4:3. SD codecs generally store information only in 4:3 dimensions – mostly 720 x 576 for PAL, and 720 x 480 for NTSC.

So how do you go about editing 16:9 information in a 4:3 codec? The answer is “anamorphic 16:9”, also known as “full-height anamorphic”.

When down-converting from HD to SD, you have the option of working in 4:3 letterbox (with black bars above and below), or in anamorphic 16:9, which squeezes the 16:9 footage into a 4:3 space, making everything in the frame look skinny.

You should always do the latter for HD, because this ensures that you’re still working in the same visual space, albeit at lower resolution. Most NLEs have the ability to unsqueeze the 4:3 footage to 16:9 for viewing while editing, so you can work in 16:9 in SD.

Why is this important? Because of placement of titles, graphics, etc. If you placed text at a certain position within a 4:3 letterbox frame, chances are it’s going to shift when you change it back to 16:9 during Online later on. FCP won’t be able to make a proportionate change, because it wasn’t working in the same proportions in the first place.
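The arithmetic behind the anamorphic squeeze is simple to check. A minimal sketch (using the simple storage-dimension maths; real ITU-R BT.601 pixel aspect ratios differ slightly, so treat these as approximate figures for a PAL frame):

```python
# Back-of-envelope check: why anamorphic 16:9 footage looks "skinny"
# in a 4:3 frame, and by how much each pixel must be stretched on display.
# Simplified maths; broadcast-exact pixel aspect ratios vary slightly.

frame_w, frame_h = 720, 576           # PAL SD storage dimensions
storage_aspect = frame_w / frame_h    # 1.25
display_aspect = 16 / 9               # what the picture should look like

# Pixel aspect ratio needed so 720 x 576 displays as 16:9:
par = display_aspect / storage_aspect
print(round(par, 4))                  # ~1.4222: each pixel shown ~42% wider

# Equivalent view: the 16:9 image was squeezed horizontally by 1/par
print(round(1 / par, 4))              # ~0.7031
```

This is the “unsqueeze” your NLE performs for viewing: the pixels are stored squeezed, then displayed roughly 42% wider, restoring the 16:9 picture.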

WORKING WITH TAPELESS HD FORMATS (SOLID-STATE)
If you’re working with P2 DVCPRO HD or XDCAM EX, your footage can most likely be imported natively into Final Cut Pro and other NLEs. In other words, you can edit with the original codec you recorded on camera.

If you’re working with P2 AVC-Intra, you can’t import natively – yet. You’ll have to transcode to Apple ProRes 422 (Apple’s proprietary codec), or another codec, depending on the NLE.

You can choose to import directly from the cards, or from the external hard disk you backed up to. If you have no idea what the above codecs are about, please go and read Part 2 of this series here.

Now, whether you should work in the native formats is another issue altogether.

DVCPRO HD is an intra-frame codec (as mentioned in Part 2), so it will work fine on most modern workstations, even Apple G5s. You can even edit it on a MacBook Pro, off an external 3.5” SATA hard disk, if you so wish. Just make sure that it’s a Firewire 800 (or at least a Firewire 400) casing, if possible.

Most modern 3.5” SATA disks can sustain an average of 30 to 40 MB/s. DVCPRO HD is only about 14 MB/s, so it’s relatively easy for the disks to keep up.
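A quick headroom calculation using the figures above (these are the article’s approximate numbers, not manufacturer specs) shows why a single disk copes comfortably:

```python
# Quick headroom check with the approximate figures above:
# can a single 3.5" SATA disk sustain a DVCPRO HD stream?

disk_sustained_mb_s = 30.0   # conservative end of the 30-40 MB/s range
dvcpro_hd_mb_s = 14.0        # approximate DVCPRO HD data rate

streams = int(disk_sustained_mb_s // dvcpro_hd_mb_s)
print(streams)               # 2: even two simultaneous streams fit

# Storage needed for one hour of footage, in GB (using 1 GB = 1000 MB)
hour_gb = dvcpro_hd_mb_s * 3600 / 1000
print(round(hour_gb, 1))     # ~50.4 GB per hour
```

So even the slow end of the range leaves room for two simultaneous streams, and an hour of rushes fits easily on a modern external disk.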

If you’re using the latest Intel processors for the PC or Mac, you can go ahead and work with XDCAM EX, but get plenty of RAM. Remember it’s long-GOP, so you’ll have to watch out for performance hits as your edit gets longer. It’s not about data access speeds, so don’t worry – you can probably work off external 3.5” Firewire 800 hard disks – the concern is whether your processors will be overly taxed.

If you’re using an older system, you’ll probably want to convert the XDCAM EX into a more manageable codec, e.g. DVCPRO HD, Photo-JPEG, or a standard definition codec like DV.

If you’re using a very old system that can’t play DVCPRO HD properly, you might want to down-convert it to DV, which is very easy on the processors.

Regardless, conversion can take quite a bit of time. Especially when downsizing from HD to SD. And you have to make sure that the conversion software – whether FCP itself or Compressor or some other software – accurately embeds the original timecode into the Offline files. If it doesn’t, Onlining it later might be quite impossible.

To be safe, you might want to burn timecode into the visual as well. You can do this in Compressor, in order to save time.
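Verifying that the Offline files carry the right timecode is just frame arithmetic. A minimal sketch of non-drop-frame timecode maths at 25 fps (PAL) – the kind of spot-check you’d run against the original tape’s timecode:

```python
# Minimal sketch of non-drop-frame timecode arithmetic at 25 fps (PAL),
# useful for sanity-checking that an Offline file's timecode matches
# the original source.

FPS = 25

def tc_to_frames(tc, fps=FPS):
    """'HH:MM:SS:FF' -> absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames, fps=FPS):
    ss, ff = divmod(frames, fps)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

a = tc_to_frames("01:00:10:24")
b = tc_to_frames("01:00:11:00")
print(b - a)                 # 1: a single-frame offset, the kind some
                             # early HDV camcorders introduced
print(frames_to_tc(a + 1))   # 01:00:11:00
```

If the conversion software drops or shifts timecode by even one frame, a check like this will catch it before the Online Editor has to.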

Because you need the latest versions of the software in order to read these new solid-state formats, you may need to do your conversions on a newer machine before doing your editing somewhere else.

If you’re in doubt about the above conversions, it will probably be best if you ask your Online facility to do the conversions for you. They will probably do it for a nominal sum, and you’ll have peace of mind, which is worth plenty.

WORKING WITH HD TAPES
If you’re working with HDV or DVCPRO HD tapes, you can also transfer the footage natively via Firewire 400, using the relevant tape deck. Again, you’ll be working with the original codec that was shot on tape.

There is one caveat when it comes to tape-based DVCPRO HD though. This is due to the evolution of the codecs as the various models of professional DVCPRO HD camcorders were developed.

If you shot on the HDX900, you’re fine, because you’ll probably have shot 720/25p (over 50p). This will transfer fine over Firewire, as long as you have access to the Panasonic HD1400 deck (which is available for rent). The older Panasonic HD1200 deck can’t be used for the HDX900 footage, as it won’t play back the 720/50p codec at all.

If you shot on the tape-based VariCam at 720/23.98p (over 59.94p), you can use Firewire to transfer as well. You may need to run it through FCP’s DVCPRO HD Frame Rate Converter tool after transferring.

However, if you shot on the tape-based VariCam at 25p (over 60p), you can’t use Firewire transfer at all in FCP. At least, not at time of writing. Canopus Edius can, but Final Cut Pro can’t. For FCP, you’ll need to digitise via HD-SDI using a capture card from Blackmagic Design or AJA. This is because FCP does not currently have a software function to extract 25p from a 60p stream.

I know the above three paragraphs sound very confusing, and I apologise – the workings of the VariCam deserve an article of their own for all this to become clear. Perhaps it’s something we can explore in the future, but not within the pages of this article.

Just remember that shooting on the tape-based VariCams at 25 fps will require the use of a dedicated capture card. This is also true when working with HDCAM, as most of its decks have no Firewire output. And those that do are outputting DV via Firewire, not HD.

If you don’t have access to capture cards, or don’t know the settings well enough, ask your Online house to help you digitise the footage for a nominal sum.

The same conditions for working in NLEs with DVCPRO HD as mentioned above (in the solid-state section) still apply. Work with it if you have the processor power to do so. Down-convert to DV if you can’t.

As for HDV, it’s an inter-frame long-GOP codec, just like XDCAM EX is, but older. You can work in it natively as long as your processor is fairly new and powerful. If you can’t, then you may want to consider transcoding to DVCPRO HD before commencing the edit. DVCPRO HD is at least 3 times larger than HDV in terms of data rate, but it’s easier to decode.

If you can’t edit HDV in HD resolutions at all, you can use the HDV deck or camcorder’s inherent down-convert capabilities when capturing. It’s available on the consumer models as well. Look in the menu for an option to convert HDV to DV. In Sony products, it’s referred to as “i-Link Conversion”.

This will allow you to digitise the HDV footage as DV footage via Firewire. Remember to check that the timecode is exactly the same. Some early models of HDV camcorders had a 1-frame delay, so it’s good to double-check.

A NOTE ABOUT WORKING WITH RED FOOTAGE
REDCODE RAW, the codec shot by RED cameras onto Compact Flash cards or RED-DRIVES, is a really new format, so many people are still coming to grips with it. Its resolutions (4K, 3K and 2K) exceed what we usually work with in the world of HD.

There are several ways to prepare RED’s R3D files for Offline. Take note that while the RED software is free for everyone to download, all of the methods below will require you to have an Intel Mac, and sometimes even specific hardware/software:

•    Work with Proxies – You can use RED ALERT!, one of RED’s own free programs, to generate proxies for editing in FCP. These proxies are Quicktime reference movies that allow you to extract half-, quarter- or eighth-sized images from the R3D files in real-time. This is extremely processor-intensive during Offline, but it saves time initially, because there’s no conversion time involved. The RED ONE camera can also generate proxies in-camera, so speak to your cameraman about turning on this function – that saves you even more time.

•    Importing with Log and Transfer – With the latest update to version 6.0.4, Final Cut Pro can now transcode from R3D to ProRes 422 via the Log and Transfer window. The down-conversion is time-consuming, and the resulting movies may have a gamma shift, but it’s less processor-intensive during Offline, because you’re working with ProRes 422 files instead. It also means you can import multiple Offline files at once, all from FCP. You may not want to colour-grade with the ProRes 422 files though, as you’re not getting the full colour information originally inherent in the R3D files.

•    Exporting Quicktime Movies in REDCINE – If you want to do a one-light (basic) colour correction to the footage before editing, you can do so in REDCINE, RED’s other free proprietary program. After correction, you can then export ProRes 422, Motion-JPEG, DVCPRO HD or other codecs for Offline Editing. This is extremely time-consuming, but useful for those who want an Offline Edit with a close-to-final look for preview screenings. You’ll also need to make sure you’re using a compatible graphics card for REDCINE to work properly. Most people will probably not go this route.

•    Exporting Quicktime Movies in RED ALERT! – As in REDCINE, RED ALERT! will also allow some level of primary colour-correction, but it’s even more limited than REDCINE’s. The main advantage of RED ALERT! is that it doesn’t seem to require specific graphics cards.

•    Exporting Quicktime Movies in REDrushes – This program is fairly new, and comes with the RED ALERT! package. This is the first program from RED to allow relatively speedy batch processing of R3D files into working files for editing, hence the reference to the word “rushes”. But there’s a trade-off – what you gain in speed, you lose in quality. The quality produced by REDrushes is not comparable to that of RED ALERT! or REDCINE.

During Online, it’s best if the Online house can work with 4:4:4 colour space, 2K resolution files for colour-grading.

Please keep in mind that, regardless of the route you take, the RED workflow is extremely time- and processor-intensive. I recently processed 1.5 hours of footage in REDrushes into SD, and that alone took about 24 hours to finish rendering. Only after that could I begin cutting.

The RED workflow and terminology can also be fairly confusing, so either take time to learn the tools beforehand, or work with people who know them, in order to save time in post. You’re going to have to wrap your mind around new ideas like colour spaces, debayering, etc. Unfortunately, we don’t have enough time in this article to cover the RED workflow in depth – perhaps in a future article.

IS 4K IMPORTANT FOR ME?
Since the RED ONE can shoot 4K, is it necessary for me to post in 4K? The short answer is “no”.

This doesn’t mean you shouldn’t shoot 4K. You should, in order to take full advantage of the sensor. However, you can downscale to 1080p or 2K during Online.

Why not go all the way to 4K, since the RED ONE can acquire 4K files? Because of the processing power required, and the hard disk requirements. 4K will eventually come to the fore, but it hasn’t yet, at time of writing.
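Rough numbers illustrate the disk and processing burden. A sketch with assumed frame sizes and bit depth (real DI pipelines vary – these figures are for uncompressed 4:4:4 frames at 10 bits per channel):

```python
# Rough numbers behind the "processing power and hard disk" point.
# Assumed frame sizes and 10-bit 4:4:4 sampling; real DI pipelines vary.

def mb_per_frame(w, h, bits_per_channel=10, channels=3):
    """Uncompressed frame size in MB (1 MB = 1e6 bytes)."""
    return w * h * channels * bits_per_channel / 8 / 1e6

mb_2k = mb_per_frame(2048, 1080)
mb_4k = mb_per_frame(4096, 2160)
print(round(mb_2k, 1), round(mb_4k, 1))      # ~8.3 MB vs ~33.2 MB per frame

# Sustained rate needed for 24 fps playback, in MB/s
print(round(mb_2k * 24), round(mb_4k * 24))  # ~199 vs ~796
```

Every step up to 4K quadruples the data: where 2K playback already demands a disk array, 4K pushes well beyond what most post facilities can sustain today.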

Although 4K is ideal for 35mm film transfers, most Hollywood features currently undergo the Digital Intermediate (DI) process at 2K. DI refers to scanning 35mm film, colour-grading it, then printing it back to film. These 2K DI’d features in the theatres usually look much better than their traditional, colour-timed, photochemical-only counterparts.

Why? Because the photochemical process results in multiple steps of generation loss.  Count ’em:

1.    In the process of colour-timing (with red, green and blue lights), you create an inter-positive from the original negative. 1st generation loss.

2.    Several inter-negatives are then created from the inter-positive. 2nd generation loss.

3.    The inter-negatives are then used to create release prints, which show up at your film distributors. 3rd generation loss.

4.    Subtitling requirements per country might result in more generation loss, depending on how the process is handled.

5.    Then, depending on how well the projectionist treats the reel, the release print will be dirtied, scratched and generally damaged every time it’s shown.

Whereas in Digital Intermediate:

1.    Film negative is scanned into computer at 2K resolution, uncompressed, with 4:4:4 colour information.

2.    Film is colour-graded. Because the footage is uncompressed, this process does not degrade the information.

3.    Footage is then printed back to 35mm film. If budget allows, every inter-negative will be printed directly from the data.

4.    The inter-negatives are then used to create release prints, which show up at your film distributors. 1st generation loss.

5.    Then, depending on how well the projectionist treats the reel, the release print will be dirtied, scratched and generally damaged every time it’s shown.

As you can see above, the photochemical process cannot really equal the 2K DI process for quality and fidelity. So 2K is still sufficient for 35mm transfers.

WHAT ABOUT AUDIO AND GRAPHICS?
Of course, besides Offline, Online and Colour-grading, there are other considerations for post-production too. I’ll cover them briefly here, but honestly, some of these topics deserve their own articles.

AUDIO POST-PRODUCTION
The audio post house is an integral part of post-production. They will work on the film’s sound design while colour-grading is happening, and co-ordinate with composers and musicians for your soundtrack, etc. Without a good score and sound design, a feature film is handicapped.

Before you start audio post, you have to decide whether you want to go Stereo or 5.1, so you can let the audio post house know. Most people instinctively go towards 5.1, but that isn’t always the “right” choice.

5.1 is only useful when you need sounds that surround the audience. E.g. a vampire rustling in the forest behind you, explosions from the tanker on the rear right, etc. If you’re doing a drama with a focus on dialogue, 5.1 won’t really add anything to the experience. Stereo would be sufficient.

Why do I highlight this? Because the Dolby 5.1 license alone is about S$8,000, depending on the exchange rate. That doesn’t include 5.1 mixing in a Dolby-certified theatre environment, which is necessary for getting the license. Sound design and pre-mixing for 5.1 is also more expensive. If you’re a cost-conscious filmmaker, these tens of thousands of dollars are important to you.

If you must go 5.1, keep in mind the above considerations. There are two 5.1 standards for theatrical release – Dolby Digital and DTS. And they are transferred to 35mm film differently. Remember to do your due diligence to research both.

For Stereo, the transfer is relatively straightforward. If you’re going to 35mm film, you’ll need to transfer the audio information onto the optical track of a film negative, which can then be merged with the visual film negative to form a print that can be projected.

If you’re shooting 25 frames per second, remember to implement the 4% pitch shift in audio post that was mentioned in Part 1 of this series.
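The arithmetic behind that 4% figure, as a quick sketch (assuming the usual 25 fps shoot projected on 35mm at 24 fps – check Part 1 and your own workflow before relying on these numbers):

```python
import math

# The arithmetic behind the "4% pitch shift".
# Assumptions: footage shot at 25 fps, projected on 35mm at 24 fps,
# so the picture (and audio) runs slower and pitch must be corrected.

shoot_fps, project_fps = 25, 24
speed_factor = project_fps / shoot_fps
print(round((1 - speed_factor) * 100, 1))   # 4.0: picture runs 4% slower

# Uncorrected, audio pitch would drop by the same ratio; in musical
# terms that's roughly 71 cents, about two-thirds of a semitone:
cents = 1200 * math.log2(shoot_fps / project_fps)
print(round(cents, 1))                      # ~70.7
```

That two-thirds-of-a-semitone drop is clearly audible on music and dialogue, which is why the correction belongs in audio post rather than being left to chance.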

And if you’re releasing DVDs or broadcast tapes, those need a slightly different mix too, whether 5.1 or Stereo. Again, the techniques and technology are slightly different, and we’ll talk about them in another article, if there’s demand for it.

GRAPHICS
Visual Effects, Motion Graphics and Titles come under this category.

Visual Effects
Once you’ve finished your Offline, you’ll know which specific shots need effects work. If you want, you can send your Offline shots to the effects guys so that they can begin work, but make it clear to them that this is not the final quality or resolution, and that you need to pass them Onlined versions of the shots later on. Also let them know what HD resolution you’re finally delivering, 1080p or 720p.

If you’re not confident of the timing of your shots, you can cut in the draft renders of the effects shots first, to make sure the cut works. Then lock your edit, and send it off to the post house for Onlining.

Inform your post house that there are effects shots that need to be sent out to an effects house, so they can prep those shots for you. The post house will have to decide with the effects house whether to colour-grade before or after the effects have been implemented. Usually the latter happens, so the post house can make the necessary adjustments for the effects to fit within the scene. Regardless, the two parties will come to some sort of agreement over the workflow.

Once the visual effects guys receive the Onlined shots, they can finalise the effect, and render out the full-quality version, which can then be cut into the Online sequence.

Motion Graphics
There are usually two kinds of motion graphics:

I)    Those that dominate the whole frame. E.g. opening sequences, illustrations, etc.

II)    Those that float above your video in a separate layer. E.g. lower-thirds for names, gradients, etc.

For the first type, it’s relatively simple. You ask your graphic artist to create the motion graphic in HD. Then you or the post house will convert it into the Offline codec being used for editing. Make sure to use the same basic file name, but don’t overwrite the original file by accident. E.g. Opening Seq HI and Opening Seq LO. Keep the Offline and Online versions of the graphics in separate folders.

Then cut the Offline motion graphic into your Offline. During Online, remember to pass the original, high-quality HD motion graphics to your post house, so they can replace your Offline copy with the Online one.

For the second type, you’ll need the graphic artists to do a little more work. On top of originating the graphics in HD, they may need to render a separate Offline version for you (for those working in SD for Offline).

This is due to the presence of the alpha channel used to composite their graphics onto your video. It also ensures you don’t need to do scaling adjustments during Online. Other than that, the same principles mentioned above apply – use the same basic file name, but keep them in separate, carefully-labelled folders.

Titles
Titles refer to text that’s overlaid on your visuals. Chances are, you’ll use these for your opening and closing credits, as well as for lower-third names (if you’re doing a documentary).

There are generally two kinds of titles – titles created in motion graphics, and titles created in the editing software.

For titles created in motion graphics, refer to the guidelines above for motion graphics.

Titles created in editing software, or NLEs (non-linear editors), aren’t usually real graphics. They are metadata – information describing how the text should look.

Because of this, in the case of FCP, titles will automatically resize themselves (without loss in quality) when going from Offline to Online. This is, of course, if the Offline project was prepared correctly – see section on “Preparing ‘Proper’ SD Files for Offline” above. If it wasn’t, you may find your titles falling out of position, being resized wrongly, etc. This results in more delays during the Online stage.

CONCLUSION
We’ve finally reached the end of this series. I’ve tried to be as exhaustive as I can, and I hope this series of articles on maximising technology for cost-effective filmmaking has been helpful to you.

If you feel there are any other areas that haven’t been covered yet, please feel free to comment, or email Sinema to request more information. As you’ve noticed, the worlds of HD and filmmaking are full of new things to be learnt, and we can always do follow-up articles to expand certain topics.

Till then, see you, and have a blast with your feature films. I look forward to seeing more intelligent, workflow-aware filmmakers arising in our industry. =)
