The Quick Answer on Levels!

Q1: While I’m editing, should I normalize each clip to -6 (or 0 or -12)?
Q2: What should be my target level on the first pass?
Q3: How about the final pass?

Before we can put a number on the target, there are a few things that need to be clarified. First, you’re probably looking at the wrong meter to answer that question. Second, the number you settle on depends on your delivery method. Broadcast is different from web, for example. Third, are you looking at peak level, or some version of average level – which will give you more of an indication of loudness?

For an in-depth look at all of this, check out part 1 of the full article on

For a quick answer, try this:
A1: I personally never normalize (yes, I said “never” – and I meant it).  What do I do instead?  The first step is to adjust the level of each clip non-destructively.  Each NLE has its own terms:  Avid uses Clip Gain, which translates to ProTools non-destructively via AAF.  Adobe Premiere has two methods at the clip level.  The non-destructive method is to adjust the gain in the sequence while viewing “clip keyframes”.  The other method is called “Audio Gain”, which is non-destructive inside of Premiere but becomes permanent when exporting, which can be problematic.
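To illustrate what clip gain is doing under the hood, here's a minimal sketch (Python with NumPy, purely illustrative – not any NLE's actual code): gain in dB is just a multiplier applied at playback, so the source samples are never altered.

```python
import numpy as np

def apply_clip_gain(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Return a gain-adjusted copy of the audio; the source samples
    are never modified -- the essence of non-destructive clip gain."""
    return samples * (10.0 ** (gain_db / 20.0))

clip = np.array([0.5, -0.25, 0.1])
louder = apply_clip_gain(clip, 6.0)  # +6 dB is roughly double the amplitude
```

Because the adjustment is just metadata applied on playback, you can change your mind at any point in the edit, which is exactly what normalizing (rewriting the file) takes away from you.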

A2:  Your target level on the first pass can be a little lower than your deliverable level, to allow for gain in the output stage, or it can be the same.  It’s a personal mixing choice.  While editing, it is important to maintain a consistent average level, which can be adjusted globally later in the process.

A3:  If you have a loudness specification, use that as your guide.  If you don’t, then you just have to know what kind of meter you have and mix accordingly.  I have included some general suggestions below:

U.S. Broadcast:  -24LKFS / -2dBTP (True Peak) or approximately -24dB RMS with peaks around -6dBFS (to be safe)
Web video:  -18LKFS or -18dB RMS
Podcast audio:  -16LKFS

The bottom line:  Mix using average levels, because what we care about is how loud something sounds, not where the digital peaks are.  Control your peaks using a limiter, and hopefully other dynamics processors along the way.  Stay tuned for more on Audio Levels & Metering!
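To make the peak-vs-average distinction concrete, here's a quick sketch (Python/NumPy, illustrative only – a real LKFS meter adds K-weighting and gating per the loudness spec): a full-scale sine wave peaks at 0 dBFS but reads about -3 dB RMS.

```python
import numpy as np

def peak_dbfs(x: np.ndarray) -> float:
    """Highest instantaneous sample level, in dB full scale."""
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x: np.ndarray) -> float:
    """Average (RMS) level -- much closer to perceived loudness."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)  # full-scale 440 Hz tone at 48 kHz

print(peak_dbfs(sine))  # 0 dBFS -- where the digital peaks are
print(rms_dbfs(sine))   # about -3 dBFS -- closer to how loud it sounds
```

Two mixes can share the same peak level and still differ wildly in loudness, which is why the average reading is the one to mix by.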

Update:  Part 2 of the article is up!  It contains a review of metering and FOUR videos for your viewing pleasure!


Audio Tracks in Premiere

Properly setting up audio tracks before getting started will streamline your edit process, give you more flexibility, and get you ready for exporting files more quickly.  From setting up a multi-channel sequence for splits or mix/mix-minus to naming tracks and audio routing, you will get something out of this quick tutorial.

Let me know what you think on Twitter.

If you like this post, please share!


Mixing in Premiere

Whether finishing in the NLE or just for approvals, every edit needs some amount of audio mixing.  You’re doing it whether you pay any attention to it or not.  So how do you approach a mix?

Step 1: Level it out

Step 2: Discover Track-based plugins

Step 3: Learn a few specifics about EQ and compression

This quick tutorial covers a great deal of ground in a fly-by approach, from how to level out your program and an intro on metering to track-based plugins including EQ and compression.  I even touch on the benefits of submixing. For more on metering, check out my article on

Looking for a little more in any particular area?  Hit me up on Twitter.
If you like this post, please share!


What Makes Editing VO Different From Editing Dialog?

A good dialog edit is a work of art.  You balance room tones (or world tones); you time every edit seamlessly around breaths and natural speech patterns; you do everything within your power to make the listener believe that it was all spoken at one time, by one person, in one place.  There are no holes in your edit – each one filled perfectly with the correct room tone, whether gathered from the takes themselves or deliberately recorded on set in the exact environment that you are matching.

Voice-over is a different thing.  Breaths between sentences and paragraphs are cut.  Even breaths between phrases may be cut.  Mouth noise is reduced or eliminated.  In most cases, the VO will sit over top of music and/or NAT sound, so you want it to be as clean and clear as possible.  When you cut out a breath or noise, decide whether to close the gap where that breath used to live, leave it alone, or extend it.  For example, if a 5-frame breath is taken in the middle of a sentence, it’s possible that at least three of those frames can be cut.  It’s also possible that the same 5-frame breath at the beginning of a sentence should become 15 frames of silence.  The rhythm of the performance is now yours to decide.
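If it helps to think in real time rather than frames, the conversion is simple (a sketch assuming a 29.97 fps NTSC timeline; substitute your project's frame rate):

```python
def frames_to_ms(frames: float, fps: float = 29.97) -> float:
    """Convert a frame count to milliseconds at the given frame rate."""
    return frames / fps * 1000.0

# A 5-frame breath is about 167 ms; trimming 3 frames leaves ~67 ms,
# while padding the same breath to 15 frames gives a ~500 ms pause.
breath = frames_to_ms(5)    # ~166.8 ms
trimmed = frames_to_ms(2)   # ~66.7 ms
pause = frames_to_ms(15)    # ~500.5 ms
```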

While you may be tempted to leave your VO seamless like dialog, you may find that it actually sounds better to trim every region to only what is needed.  Not only does this aid in the elimination of mouth noise and breaths, but it focuses your attention on every detail of the voice-over performance, and you’re more likely to slide phrases around and put the pauses where they can better help tell the story.

It’s possible that you know more about how this should be performed than the person who voiced it (unless your VO talent IS the producer, writer, etc. – then you may defer to them).  Opening up between sections, paragraphs or sentences and tightening up (or loosening) list items or detail points are all ways that you can greatly improve the power of the voice-over in the story-telling process.

If you’re recording the VO, give a little “pre-production” thought to the rhythm that you need for the piece.  This could influence the casting of the VO talent or just coaching them during the recording.  Of course, remember proper etiquette if you’re working with a director or producer and don’t step on any toes.

Some of this thought process can be applied to dialog as well, but just remember when editing dialog to fill the holes!

Comments?  Find me on Twitter:


Keyframes vs. Fades in NLE

So, we had some great discussions in #postchat (5/28/2014) regarding the use of keyframes for volume automation in the NLE and how it is received via OMF/AAF in ProTools (or another DAW using OMF/AAF), as well as when to use fades vs. keyframes and how clip gain plays into that.  I’d like to expand on all of that just a bit.

All the major NLEs will translate fade information reliably through OMF or AAF (I’ll explain the difference between these formats in another post).  First thing to realize is that each NLE treats gain (volume) differently.  I’ll briefly summarize here:

Avid (Media Composer)

MC uses clip gain as well as keyframe automation, both of which come through the OMF/AAF non-destructively.  The newer versions of ProTools use clip gain as well, which works exactly like it does in Avid.  (Older versions of PT used to convert clip gain to volume automation, and it could sometimes drive me crazy!  An editor would cut a clip to apply clip gain with a crossfade, and I would get the crossfade AND the keyframed automation – it was bizarre.  Now, I get the clip gain the way it was intended and it’s beautiful!)  This is why the workflow of applying clip gain and using crossfades is so solid.  It is less “audio guy”-like, but it’s a great workflow from Avid to PT, and I’ve found myself using it more and more.

Final Cut Pro 7

FCP versions 6 and 7 use keyframe volume automation, but no clip gain.  Very straightforward.  If you’re on a version earlier than 6, your options are very limited.  FCPX is a different animal, and there are great resources out there to help you.

Adobe Premiere Pro CC

Premiere is more complicated.  You can apply gain to a clip, which is destructive.  You can use clip gain right on the track to add up to 6dB, and you can also keyframe on the track (same as in Track Mixer).  Clip keyframes will translate via OMF, and they can be smoothed with crossfades.  Even though the interface may show a sudden jump in gain, the clip gain comes before the crossfade and is actually smoothed out by it.  If this sounds confusing, please leave a comment below.  Much more information is coming on this topic.

The use of keyframes as fades

It is common practice for editors to use keyframe volume automation to ramp up the audio at the beginning of a clip (music, NATs, etc.) and to ramp it down before the end of the clip.  If this is your workflow, I can’t really say that it’s wrong…  But, if you send all of your projects to audio post, I can tell you that the person receiving your files would probably prefer a cleaner timeline.  One reason for this is that ProTools will create volume automation between the clips, essentially keyframing the entire track.  So if you’re using keyframe data as fades, the track will have zero volume (negative infinity) between clips.

Also, in ProTools we aren’t usually looking at the volume automation, so you may have already pulled the music out with volume automation while we still see the clip on the timeline for another five seconds.  Basically, using fades as fades and volume automation as volume automation gives a better visual representation of what’s really going on.

So far, I’m sure there hasn’t been any compelling reason for some of you to stop using keyframes in place of fades, and that’s okay.  It’s not one of my most passionate topics.  I personally think it’s easier in the NLE to use fades and if you try to get used to it, you may fall in love with that workflow.  If you’re not married to either process, use the fades.  There’s a real time and place for keyframed volume automation and getting comfortable with it will greatly improve your mixes.  If your comfort with keyframes means that I have to delete a few when it comes to me, then so be it.

Thanks for reading, keep cutting, and please find me on Twitter:


Pan vs. Balance

A big part of mixing audio is panning, which is like steering your audio between left and right (or multi-channel for surround). Sometimes we use panning to isolate stereo sources to mono outputs, which can be the case when making splits.  But what would happen if what you thought was a panner was really a balance control?

In Premiere, what looks like a panner (audio steering wheel), acts more like the two levers of a Green Machine or Pod Racer!  When you want to steer left, you decrease the right – instead of steering the right channel content to the left.  Sound confusing?  Check out this Mixing Minute video!  (and read below to dive just a little deeper)

So, how else could a DAW (Digital Audio Workstation) handle stereo content?  Well, with a stereo panner.  ProTools, for example, puts two panners on a stereo track or aux input (submix track).  The default for these panners is hard left and right, but if you want to “pan” the stereo content, you have separate controls for each channel.  In the case of the splits mentioned above, the right panner would be set all the way to the left.

Stereo panner in ProTools
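A simplified numeric sketch of the two behaviors (Python/NumPy; this ignores the pan laws that real mixers apply, so treat it as a model, not a measurement):

```python
import numpy as np

# Identical content on both channels of a stereo clip
left = np.full(1000, 0.5)
right = np.full(1000, 0.5)

# Dual-panner model (ProTools-style): with both panners set hard left,
# both channels are routed to -- and sum into -- the left output.
dual_pan_mono = left + right          # peaks at 1.0

# Balance model (Premiere-style): turned hard left, the right channel
# is simply attenuated to silence; nothing is re-routed.
balance_mono = left + right * 0.0     # peaks at 0.5

difference_db = 20 * np.log10(dual_pan_mono.max() / balance_mono.max())
print(difference_db)  # about +6 dB hotter from the dual panners
```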

The dual panners in ProTools would produce a significantly hotter mono signal than the technique used in Premiere.  In Premiere, the resulting gain is reduced by a considerable amount whether you place stereo content on a mono track or route stereo content through a mono sub.  But these are topics for another post!  Comments?  Find me on Twitter:  And thanks for reading!

Mixing Minute

Audio Plugins Intro

Audio plugins come in three basic categories:

Dynamics (compression, limiting, etc.)
Spectral / filter effects (EQ, etc.)
Time-based effects (delay, reverb, etc.)

All three are critical for a good mix, but I dare say that no mix is complete without compression and EQ.  So let’s start there.

The first thing to know about plugins is that you need to buy some!  You say, “My NLE (or DAW) comes with a ton of plugins already…”  Buy some anyway.  In fact, buy the Renaissance (RennMaxx) bundle from Waves.  I’ll tell you why.

First, dynamics.  The R-Compressor is extremely versatile and colorless.  I literally use this compressor every day.  I will go into detail on best practices in a later blog post, but for now just know that for dialog or voice-over, this compressor is easy to use and sounds great.  Also a part of dynamics is de-essing.  The R-DeEsser is quite possibly the best de-esser available at any price.  It just plain works.  Set it to “Split” and adjust the threshold and range so that it’s working.  The R-Vox and Renaissance Axx plugins are simple, two-knob compressors that certainly have their uses as well.

Second, spectral/filter effects.  The Renaissance Equalizer is my go-to EQ for the same reasons.  It’s colorless and easy to get the results you need.  There are many reasons to use different EQs, but my “desert island” EQ would have to be the REQ-6.  Also in the spectral category is the Renaissance Bass plugin.  R-Bass is a really nice effect to use to bring out bass elements that you need, or to prepare low-frequency sounds for playback on less-than-full-range systems.  Quality plugin, but not essential for everyone.

Third, time-based effects.  The Renaissance Reverb is extremely good.  I have better reverbs, but not even close to this price range.  It’s easy to use and sounds very good.  You also get the IR-L Convolution Reverb, which is more difficult to get a good sound from right out of the box, but has capabilities beyond those of the R-Verb.
Lastly, as part of the bundle you get Waves Tune LT, which can be used to tune or modify pitch.  I use the “non-LT” (full) version of Waves Tune all the time to change the inflection of dialog.  For example, if you have to cut a sentence so that it finishes on a word that actually keeps going in the original read, you often need to change the pitch and length of at least the final syllable.  Waves Tune does that well.  Personally, I found the LT version less useful for my purposes, but the full version works for me.  So, my first audio plugin recommendation is the Waves RennMaxx bundle.  Until next time!
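To give a feel for what any compressor is doing under the hood (a generic static gain model for illustration, not how the Waves plugins are actually implemented):

```python
def compressor_output_db(input_db: float,
                         threshold_db: float = -20.0,
                         ratio: float = 4.0) -> float:
    """Static gain computer for downward compression: below threshold the
    signal passes unchanged; above it, every `ratio` dB of input yields
    only 1 dB of output above the threshold."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# A peak at -8 dB is 12 dB over a -20 dB threshold; at 4:1 it comes
# out at -17 dB, i.e. 9 dB of gain reduction.
print(compressor_output_db(-8.0))   # -17.0
print(compressor_output_db(-30.0))  # -30.0 (below threshold, untouched)
```

Real compressors add attack and release times so the gain reduction moves smoothly, but the threshold/ratio math above is the core of it.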

Comments?  Find me on Twitter:

Check out more Waves plugins and save 10% with the code from this referral link: