Every composer would like to have the services of a fully qualified score mixing engineer on every project. You would, wouldn’t you? (Insert picture of smiling Score Engineer, like the one in my mirror.) Like any specialist, he or she would bring a wealth of knowledge and experience that would make your life easier and your score sound better and more competitive.
But – welcome to the real world – I realize it doesn’t always work out that way. For reasons of budget and expediency, a composer is often called on to record, mix, and submit a finished score on his or her own. Since every composer must own the computers and other technical equipment that make our modern music business possible, it’s advantageous to know what to do when you need to, so that you can fend for yourself and compete.
For a dozen years I taught a class at UCLA Extension, in the Film Scoring program, called “Staying in Sync”. It started out as the “time code class” but evolved over the years into a class covering digital audio, synchronization and timing issues, and miscellaneous studio tips for composers.
Here are some guidelines, in two categories. For those of you with a bit more experience this may serve as a review, and for others, a good starting checklist.
Sample rate and bit depth
This topic needs to start with a discussion of delivery requirements. On every score project you are asked to do, there will be specific technical requirements for the music delivery format. You need to find out what those are from your client; don’t just go with “whatever you usually do”.
Most film and television scores are delivered at a sample rate of 48 kHz and a bit depth of 24 bits. This is in contrast to music CD work, which uses a sample rate of 44.1 kHz at 16 bits. But don’t just assume – ask. Most software can convert between all the common sample rates and bit depths, but your product will sound better if you start out in the delivery format.
I still sometimes hear that 48K sounds better than 44.1. This is a myth that got its start in the early days of digital audio. Theoretically, the difference would amount to less than one-third of an octave above 20 kHz (beyond the top end of human hearing). In reality, today’s digital interfaces often use the same filtering for the two sample rates. What does make things sound (a bit) worse – and is a pain in the keister besides – is sample rate conversion. Who wants to change all those files when you are finally finished and up against your deadline? Just go with the specified rate.
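For the curious, the arithmetic behind that “one-third octave” claim is easy to check. A quick sketch (nothing here is specific to any interface – these are just the standard Nyquist limits, half the sample rate):

```python
import math

# The highest frequency a digital system can represent is half the
# sample rate (the Nyquist limit).
nyquist_44_1 = 44_100 / 2   # 22,050 Hz
nyquist_48 = 48_000 / 2     # 24,000 Hz

# Express the gap between the two limits in octaves
# (an octave is a doubling of frequency).
octaves = math.log2(nyquist_48 / nyquist_44_1)

print(f"44.1 kHz Nyquist: {nyquist_44_1:.0f} Hz")
print(f"48 kHz Nyquist:   {nyquist_48:.0f} Hz")
print(f"Difference: about {octaves:.2f} octave")
```

The gap works out to roughly an eighth of an octave – and all of it above the range of human hearing.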
There is one exception to this that isn’t really an exception at all. Besides delivering the score, everyone wants good-sounding recordings for the “demo reel CD”. For this you will have to do sample rate and bit depth conversion, to 44.1/16. Just use the highest-quality conversion setting available in your software and it will be fine. You can do this after the deadline has passed and you have some spare time.
What about the high sample rates that some software and hardware support these days – 88.2, 96, 176.4, 192? In some cases, such as extremely high-fidelity orchestral recording, they may be worth it to a small degree. But bear in mind that some parts of your system, such as software synths, may not be compatible with those rates, and you will still have to do a conversion before delivery. Plus, recording at higher rates will consume a ton of hard drive space, like filling a bathtub with a fire hose.
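To put some numbers on that bathtub, here is a rough sketch of uncompressed PCM storage rates (stereo is assumed for illustration; multitrack sessions multiply accordingly):

```python
def megabytes_per_minute(sample_rate, bit_depth, channels=2):
    """Uncompressed PCM storage for one minute of audio, in MB."""
    bytes_per_second = sample_rate * (bit_depth // 8) * channels
    return bytes_per_second * 60 / 1_000_000

# Compare common rates at 24-bit stereo.
for rate in (44_100, 48_000, 96_000, 192_000):
    print(f"{rate / 1000:g} kHz / 24-bit stereo: "
          f"{megabytes_per_minute(rate, 24):.1f} MB per minute")
```

Doubling the sample rate doubles the disk space – 192 kHz burns through four times as much drive as 48 kHz for the same music.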
Now for bit depth. 24-bit has become the industry standard, superseding 16-bit. You should definitely go with 24-bit. Just set everything that way and leave it. Why? In a nutshell, digital audio becomes cleaner (less distortion) the louder you record, up to 0, where the red light comes on – whoops, distortion, “sorry, you played great, but can we get that once again please?” 24-bit has a wider “sweet spot” where it sounds good, so you don’t have to have everything quite so loud. Also, mixing “in the box”, plugins, and so on all sound better at 24-bit. If your delivery requirement is 16-bit for mixes, make the conversion at the end and you will still have good sound.
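That wider “sweet spot” can be quantified with the standard rule of thumb that each bit of linear PCM buys you about 6 dB of dynamic range. A back-of-the-envelope sketch:

```python
def dynamic_range_db(bits):
    """Approximate dynamic range of linear PCM: about 6.02 dB per bit."""
    return 6.02 * bits

print(f"16-bit: ~{dynamic_range_db(16):.0f} dB")
print(f"24-bit: ~{dynamic_range_db(24):.0f} dB")
# The extra ~48 dB means you can record well below full scale (no red
# light) and still keep your music far above the noise floor.
```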
Time code

SMPTE time code used to be a complicated big deal – I taught a whole six-week class on it. Fortunately, things have gotten a lot simpler these days.
Let’s review: time code is basically a way of numbering the frames of a film, video, or audio recording. It gives you location information and allows video and audio to be synchronized. There are several different formats of time code, based on the number of frames per second of a particular video format, i.e., film or television. Our audio software can work with any type of time code, but it has to be manually set to the same format as your video.
For a score project, you will typically be supplied with a QuickTime video file, and (you may have to specify this) it should have a visible box with time code numbers on the picture (sometimes called visual time code or window burn). Linear time code on an audio channel is unnecessary. It’s customary for the first frame of program picture (not including leader) to be at a time code address of eight frames past 01:00:00:00. The numbers should start ten or more seconds before that to allow for a preroll, although the software doesn’t really need it. Sometimes the start mark will be in the neighborhood of 2 hours or greater, such as in a film score that has more than one reel. However, the time code should never start at 00:00:00:00.

The very first frame of the video should have “rolling” numbers – that is, no frozen frame numbers that only start moving after a few seconds. Time code should be continuous and never stop or skip. If (horrors!) they re-cut the video, they will have to generate new time code, but the very first frame of program should keep the same number it had before.
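Under the hood, a time code address is nothing more than a frame count dressed up as hours, minutes, seconds, and frames. A minimal sketch (assuming a non-drop format, and a hypothetical whole-number 30 fps rate to keep the arithmetic round):

```python
def timecode_to_frames(tc, fps=30):
    """Convert a non-drop HH:MM:SS:FF address to an absolute frame count."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return (h * 3600 + m * 60 + s) * fps + f

# The customary 1-hour start mark, expressed as a frame count.
start = timecode_to_frames("01:00:00:00")
cue = timecode_to_frames("01:02:10:15")  # hypothetical cue location
print(cue - start, "frames into the program")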
The frame format and numbering decisions will usually be made by a post-production supervisor or video/film editor, who will use the editing software to stripe continuous time code numbers onto the frames. Hopefully they will tell you what the time code format is; otherwise, in QuickTime you can check the “Movie Inspector” under the Window menu. Make sure your software is set to match.
Besides the various frame rates, there is the issue of drop frame. Two of the most common time code formats are 29.97 drop frame and 29.97 non-drop frame. Explaining the distinction between the two would be a fine cure for insomnia, but suffice it to say they are not the same; again, find out what format is on your video and set the software or machine to match.
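For those who can’t sleep anyway: in drop frame, no actual frames are dropped – only frame *numbers* 00 and 01 are skipped at the start of every minute, except every tenth minute, so the running numbers stay close to real clock time at 29.97 fps. A sketch of the standard counting rule (worth double-checking against your own software before relying on it):

```python
def df_timecode_to_frames(tc, fps=30):
    """Convert a 29.97 drop-frame HH;MM;SS;FF address to a true frame count.

    Frame numbers 00 and 01 are skipped at the start of every minute,
    except minutes divisible by 10, so subtract 2 frames for each
    "dropping" minute that has elapsed.
    """
    h, m, s, f = (int(p) for p in tc.replace(";", ":").split(":"))
    total_minutes = h * 60 + m
    dropped = 2 * (total_minutes - total_minutes // 10)
    return (h * 3600 + m * 60 + s) * fps + f - dropped

# At one minute in, the first valid label is ;02 – numbers 00 and 01
# were skipped, so it lands on true frame 1800.
print(df_timecode_to_frames("00:01:00;02"))
```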
I should also mention that there’s a newer time code format, 23.976, which exists to accommodate the new HD video formats. I know that Digital Performer 6 can work with it, but DP5 cannot – you may have to upgrade.
Once the video is imported into your composing software, there are two steps. First, set the software so that its time code numbers equal what is on the video screen. In Pro Tools this is called “spotting”; in DP it is “set movie start”. Other apps have similar settings. Make the numbers match the very first number in the video. Second, set bar 1 beat 1 of your cue to the time code number at which you want it to start.
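Those two settings amount to simple offset arithmetic: once the movie start is spotted, the cue start is just a distance in time from it. A sketch (the addresses and the whole-number 30 fps rate here are placeholders – use whatever your project actually specifies):

```python
def tc_to_seconds(tc, fps=30):
    """Seconds from a non-drop HH:MM:SS:FF address, at a given frame rate."""
    h, m, s, f = (int(p) for p in tc.split(":"))
    return h * 3600 + m * 60 + s + f / fps

movie_start = tc_to_seconds("01:00:00:00")  # what you told the software
cue_start = tc_to_seconds("01:04:33:12")    # where bar 1 beat 1 goes
print(f"Cue begins {cue_start - movie_start:.2f} s into the movie")
```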
In a professional organization of which I’m a member, another score mixer sent out promos for a seminar in which he was going to teach composers what they needed to know about score engineering. I’m sure he had many wise things to say, but my eye was caught by his assertion that “there is a particular time code format that every composer should insist on”. In case this has caused any misconceptions, let me clear things up: in no circumstances that I have ever seen does a composer have a choice of what type of time code will be used. Someone else makes that choice. It’s no big deal – just find out what type is being used.
More to come
I’ve got some things to say about score mixing, both technical and “artistic”, but it looks like this is about enough for this time. I want to cover some things about plug-ins, and of course I won’t forget “stems mixing”, so check back with me for Part 2.