The quick answer on levels!

Q1: While I’m editing, should I normalize each clip to -6 (or 0 or -12)?
Q2: What should be my target level on the first pass?
Q3: How about the final pass?

Before we can put a number on the target, there are a few things that need to be clarified. First, you’re probably looking at the wrong meter to answer that question. Second, the number you settle on depends on your delivery method. Broadcast is different from web, for example. Third, are you looking at peak level, or some version of average level – which will give you more of an indication of loudness?

For an in-depth look at all of this, check out part 1 of the full article on AOTG.com.

For a quick answer, try this:
A1: I personally never normalize (yes, I said “never”, and I meant it).  What do I do instead?  The first step is to adjust the level of each clip non-destructively.  Each NLE has its own terms:  Avid uses Clip Gain, which translates to ProTools non-destructively via AAF.  Adobe Premiere has two methods at the clip level.  The non-destructive method is to adjust the gain in the sequence while viewing “clip keyframes”.  The other method is called “Audio Gain”, which is non-destructive inside of Premiere, but becomes permanent when exporting, which can be problematic.
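
If it helps to see the difference in code, here is a minimal sketch (plain NumPy, with the soundfile library assumed for WAV I/O and placeholder file names): normalizing rewrites the samples themselves, while a clip-gain style adjustment is just a number stored with the clip and applied at playback, so the original audio is never altered.

```python
import numpy as np
import soundfile as sf  # assumed helper library for WAV I/O

data, sr = sf.read("clip.wav")  # placeholder file name

# "Normalize to -6 dBFS": rewrites every sample so the peak hits -6 dBFS.
target_peak = 10 ** (-6 / 20)                      # -6 dBFS as a linear value (~0.5)
normalized = data * (target_peak / np.max(np.abs(data)))
sf.write("clip_normalized.wav", normalized, sr)    # destructive: new samples on disk

# "Clip gain": the original file is untouched; a gain value is stored with the
# clip and applied only when the NLE plays back or renders the timeline.
clip_gain_db = -4.0                                # whatever the clip needs in context
playback = data * 10 ** (clip_gain_db / 20)
```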

A2:  Your target level on the first pass can be a little lower than your deliverable level, to allow for gain in the output stage, or it can be the same.  It’s a personal mixing choice.  While editing, it is important to maintain a consistent average level, which can be adjusted globally later in the process.

A3:  If you have a loudness specification, use that as your guide.  If you don’t, then you just have to know what kind of meter you have and mix accordingly.  I have included some general suggestions below:

U.S. Broadcast:  -24LKFS / -2dBTP (True Peak) or approximately -24dB RMS with peaks around -6dBFS (to be safe)
Web video:  -18LKFS or -18dB RMS
Podcast audio:  -16LKFS

The bottom line:  Mix using average levels, because what we care about is how loud something sounds, not where the digital peaks are.  Control your peaks using a limiter, and hopefully other dynamics processors along the way.  Stay tuned for more on Audio Levels & Metering!
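
As a rough illustration of average vs. peak, here is a small NumPy sketch that reports both numbers for a file (soundfile assumed for WAV I/O, placeholder file name).  A true LKFS/LUFS reading requires a proper loudness meter with K-weighting and gating, so treat the RMS figure only as a ballpark average level.

```python
import numpy as np
import soundfile as sf  # assumed helper library for WAV I/O

data, sr = sf.read("mix.wav")       # placeholder file name
if data.ndim > 1:
    data = data.mean(axis=1)        # quick mono fold-down for measurement

peak_dbfs = 20 * np.log10(np.max(np.abs(data)))        # sample peak
rms_dbfs = 20 * np.log10(np.sqrt(np.mean(data ** 2)))  # average (RMS) level

print(f"Sample peak:   {peak_dbfs:.1f} dBFS")
print(f"Average (RMS): {rms_dbfs:.1f} dBFS")  # not LKFS, but a useful ballpark
```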

Update:  Part 2 of the AOTG.com article is up!  It contains a review of metering and FOUR videos for your viewing pleasure!

Audio Tracks in Premiere

Properly setting up audio tracks before getting started will streamline your edit process, give you more flexibility, and get you ready to export files quicker.  From setting up a multi-channel sequence for splits or Mix/Mix-minus to naming tracks and audio routing, you will get something out of this quick tutorial.

Let me know what you think on Twitter.

If you like this post, please share!


Mixing in Premiere

Whether finishing in the NLE or just for approvals, every edit needs some amount of audio mixing.  You’re doing it whether you pay any attention to it or not.  So how do you approach a mix?

Step 1: Level it out

Step 2: Discover Track-based plugins

Step 3: Learn a few specifics about EQ and compression

This quick tutorial covers a great deal of ground in a fly-by approach, from how to level out your program and an intro to metering, to track-based plugins including EQ and compression.  I even touch on the benefits of submixing.  For more on metering, check out my article on AOTG.com.

Looking for a little more in any particular area?  Hit me up on Twitter.

If you like this post, please share!


What the POP?

How to fix the “POP” during recording or in post.

Before we can fix the POP, we need to understand where it came from.  To do that, let’s quickly look at microphone theory.  To oversimplify, sound waves hit a microphone and cause slight physical movement of the diaphragm, which is translated to electricity via an electromagnet.  If your mouth happens to be right in front of the microphone diaphragm and you have a loud plosive “P” or “T”, it moves the diaphragm substantially more than it should.  That comes through as a very aggressive (mostly low frequency) “POP”.

The best way to fix plosives is to avoid them in the recording.  A pop filter helps reduce plosives by allowing the audio to pass through while dispersing the air, reducing the amount of air that hits the diaphragm.  If you don’t have a pop filter, try moving the microphone position so that it is still pointing at the mouth, but the mouth is not necessarily pointing at the microphone.  Another trick is to put a finger between your mouth and the microphone to disperse the air before it hits the diaphragm.  This may not seem conducive to a natural performance, so I prefer one of the other techniques.

If you do need to fix it in post, just think about what you’re trying to get rid of.  First, the overall energy at that point is too high, so a gain reduction is necessary – through automation and/or dynamics processing.  Second, a High Pass Filter (HPF) will help roll off the very low frequencies, reducing the strength of the plosive.  Now combining those two concepts, we can use a multi-band compressor to drastically reduce the energy of the low band without affecting the rest of the frequency spectrum whenever the low frequency energy is too high.  I’ve actually listed these in the opposite order of how they should be processed.  Let’s check it out.
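
Here is a rough sketch of those two ideas in SciPy, in the order they should actually be processed: high-pass first, then pull the remaining energy down over the plosive.  The 80Hz cutoff, the -8dB dip, and the start/end times are assumed values for illustration, not a prescription, and the multi-band stage is left to your DAW.

```python
import numpy as np
import soundfile as sf                 # assumed helper library for WAV I/O
from scipy.signal import butter, sosfilt

data, sr = sf.read("vo_take.wav")      # placeholder file name

# High-pass filter to roll off the low-frequency energy that carries
# most of the "POP" (80 Hz and 4th order are assumed starting points).
sos = butter(4, 80, btype="highpass", fs=sr, output="sos")
filtered = sosfilt(sos, data, axis=0)

# Simple gain automation over the marked plosive, standing in for the
# dynamics/multi-band stage. Times and the -8 dB dip are assumptions.
start, end = int(1.20 * sr), int(1.32 * sr)
filtered[start:end] *= 10 ** (-8 / 20)

sf.write("vo_take_deplosived.wav", filtered, sr)
```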

NOTE:  Since the writing of this article, iZotope has released RX5 Advanced, which has a one-click solution for removing plosives that is WAY better than all of these techniques combined!  This article is still valid, but if you really need to remove plosives in post, the BEST way is RX:

https://www.izotope.com/en/products/audio-repair/rx/whats-new/

Comments?  Find me on Twitter:  https://twitter.com/MichaelAudio

What Makes Editing VO Different From Editing Dialog?

A good dialog edit is a work of art.  You balance room tones (or world tones); you time every edit seamlessly around breaths and natural speech patterns; you do everything within your power to make the listener believe that it was all spoken at one time, by one person, in one place.  There are no holes in your edit – each one filled perfectly with the correct room tone, whether gathered from the takes themselves or deliberately recorded on set in the exact environment that you are matching.

Voice-over is a different thing.  Breaths between sentences and paragraphs are cut.  Even breaths between phrases may be cut.  Mouth noise is reduced or eliminated.  In most cases, the VO will sit over top of music and/or NAT sound, so you want it to be as clean and clear as possible.  When you cut out a breath or noise, decide whether you need to close the gap where that breath used to live, or if you need to leave it alone or extend it.   For example, if a 5 frame breath is taken in the middle of a sentence, it’s possible that at least three of those frames can be cut.  It’s also possible that the same 5 frame breath at the beginning of a sentence should become 15 frames of silence.  The rhythm of the performance is now yours to decide.

While you may be tempted to leave your VO seamless like dialog, you may find that it actually sounds better to trim every region to only what is needed.  Not only does this help eliminate mouth noise and breaths, it forces you to focus your attention on every detail of the voice-over performance, and you’re more likely to slide phrases around and put the pauses where they can better help tell the story.

It’s possible that you know more about how this should be performed than the person who voiced it (unless your VO talent IS the producer, writer, etc. – then you may defer to them).  Opening up between sections, paragraphs or sentences and tightening up (or loosening) list items or detail points are all ways that you can greatly improve the power of the voice-over in the story-telling process.

If you’re recording the VO, give a little “pre-production” thought to the rhythm that you need for the piece.  This could influence the casting of the VO talent or just coaching them during the recording.  Of course, remember proper etiquette if you’re working with a director or producer and don’t step on any toes.

Some of this thought process can be applied to dialog as well, but just remember when editing dialog to fill the holes!

Comments?  Find me on Twitter:  https://twitter.com/MichaelAudio

Need more music AND dialog in your mix?

Ducking with Waves C6

Today, I’m going to discuss how to use a ducker in any post-production audio mix, and specifically how to duck music against dialog using the Waves C6 Multi-Band Compressor with a side-chain.  I use duckers in virtually every mix, varying how much gain reduction I allow depending on the program and mix aesthetic.  I have developed a really nice technique for getting the best results without anyone ever noticing that anything is happening (which is exactly what you want).  If you’re comfortable with the concepts of ducking and side-chaining, feel free to skip down to the “Technique” section.  This article does assume a basic understanding of compression, and even multi-band compression to some degree.

Quick Summary of Ducking

Goal:  Whenever there is speech, we want to decrease the music ever-so-slightly to make room for the speech.  How do we do this?  One way is with a ducker.  A “ducker” is just a compressor that “ducks” one source when another source triggers it.  What we do is put a compressor on the music track or submix with a side-chain (see below) input from the dialog.

Basic Ducker Settings

The threshold control will be set low (around -35dB or -40dB) so that whenever there is actual speech, the compressor is triggered.  The ratio is set extremely low (in the range of 1.1:1), in order to have a gain reduction during speech of between 1dB and 6dB, depending on mix taste.  Some compressors use Range controls instead of ratio – this makes ducking even easier, since controlling gain reduction is the goal.

What is a side-chain?

A side-chain is simply a signal (in our case, dialog), which is sent into the “side” of the compressor (or key) to cause gain reduction on another signal.  In our case the signal passing through the compressor is the music (goes in, comes out processed), and the dialog is simply used to determine when the gain reduction should happen (keying).
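
Here is a bare-bones sketch of a ducker in NumPy (a conceptual toy, not any particular plugin): the dialog drives a simple envelope follower, and whenever that envelope crosses the threshold the music is pulled down by a fixed amount.  The -35dB threshold and 3dB of ducking echo the settings above; the 10ms/400ms attack and release times, the file names, and the soundfile library are assumptions.

```python
import numpy as np
import soundfile as sf   # assumed helper library for WAV I/O

music, sr = sf.read("music.wav")       # placeholder file names; assume equal
dialog, _ = sf.read("dialog.wav")      # length and sample rate for simplicity

def envelope_db(x, sr, window_ms=50):
    """Crude moving-RMS envelope of the side-chain signal, in dBFS."""
    if x.ndim > 1:
        x = x.mean(axis=1)
    win = np.ones(int(sr * window_ms / 1000))
    rms = np.sqrt(np.convolve(x ** 2, win / win.size, mode="same"))
    return 20 * np.log10(rms + 1e-12)

threshold_db = -35.0                 # trip the ducker on speech, not room noise
duck_db = 3.0                        # how far the music drops during speech
attack_s, release_s = 0.010, 0.400   # assumed smoothing times

# Target gain: 1.0 normally, duck_db of reduction while dialog is over threshold.
target = np.where(envelope_db(dialog, sr) > threshold_db,
                  10 ** (-duck_db / 20), 1.0)

# One-pole smoothing so the gain changes are inaudible rather than abrupt
# (a plain Python loop: slow on long files, but clear).
gain = np.empty_like(target)
gain[0] = target[0]
a_atk = np.exp(-1 / (attack_s * sr))
a_rel = np.exp(-1 / (release_s * sr))
for i in range(1, len(target)):
    a = a_atk if target[i] < gain[i - 1] else a_rel
    gain[i] = a * gain[i - 1] + (1 - a) * target[i]

ducked = music * (gain[:, None] if music.ndim > 1 else gain)
sf.write("music_ducked.wav", ducked, sr)
```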

Technique

Until today, I would use two instances of the Renaissance Compressor for ducking music under dialog or voice-over.  The first is set to a quick release (5ms) and gives the impression of increasing the level of the dialog without actually raising anything.  It does this by making imperceptible gain reductions in the music whenever speech is present.  The fast release time allows more aggressive gain reduction when needed.  The second ducker has a slower release time (400ms), which makes the music feel a little softer during the speech, but still without any noticeable change in music level.

Another common effect on the music bus is an EQ that reduces upper-midrange frequencies in the music to make room for the voice.  The EQ would, of course, be in effect at all times.  I had thought about sending a feature request to Waves to add a side-chain to the C4 Multiband Compressor so that I could set it to reduce only the upper-midrange band when the voice is present.  This would eliminate my need for the EQ as well as one or both of the RComp duckers.  They must have read my mind and then gone way above and beyond with the C6!
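
For the curious, here is what that always-on upper-midrange dip looks like as a plain biquad peaking filter (the standard Audio EQ Cookbook formula, not the Waves plugin).  The 2.5kHz center, -3dB cut, and Q of 1.0 are assumed starting points, and soundfile is assumed for WAV I/O.

```python
import numpy as np
import soundfile as sf        # assumed helper library for WAV I/O
from scipy.signal import lfilter

def peaking_eq(f0, gain_db, q, sr):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

music, sr = sf.read("music.wav")              # placeholder file name
b, a = peaking_eq(2500, -3.0, 1.0, sr)        # assumed: gentle 3 dB dip at 2.5 kHz
carved = lfilter(b, a, music, axis=0)         # leave room for the voice
sf.write("music_carved.wav", carved, sr)
```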

Waves C6 Multiband Compressor

The C6 is set up like the C4, or most 4-band compressors, but then they added two floating bands with separate parametric control, which you can place anywhere along the spectrum over top of the original four bands.  That’s pretty cool in and of itself!  Then they added the side-chain with more control than you ever imagined you might need.

You can choose to trigger each band separately from either “Internal” (in our case, the music) or “External”, which is the side-chain (in our case, the dialog).  I had no idea!  So what I ended up doing was using the regular four bands like I might use the C4, to gently balance the various tracks of music and increase overall continuity.  For example, if one piece of music has a strong shaker or hi-hat, the high-band compression can help keep that in check (speed up the attack and release on that band), or the low band might keep a heavy bass in check.  Then I put the two floating bands in “External” mode to key them from the dialog.  I put one band around 2kHz, with a wide Q and 6dB of gain reduction.  The other band is narrower, with 2dB of gain reduction, lower in the voice spectrum.

In the end, I didn’t use any EQ on the music sub, but I did keep my fast-release RComp ducker, because I just really like the performance of it.  I could have done the whole job with just the C6.

Recently I was asked to pick one audio plugin as a “desert island” plugin.  I now have one.  If you’re looking for easy-to-use plugins that sound great, I still recommend the Renaissance bundle from Waves, but if you’re ready to step up to an incredibly versatile audio processor, give the C6 Multiband Compressor a spin.

Check out more Waves plugins and save 10% with this referral link:  http://refer.waves.com/OZry

Comments?  Find me on Twitter:  https://twitter.com/MichaelAudio

Keyframes vs. Fades in NLE

So, we had some great discussions in #postchat (5/28/2014 transcript here:  https://storify.com/EditorLiam/postchat-all-things-audio-w-michaelaudio) regarding the use of keyframes for volume automation in the NLE and how it is received via OMF/AAF in ProTools (or another DAW using OMF/AAF).  We also discussed when to use fades vs. keyframes and how clip gain plays into that.  I’d like to expand on all of that just a bit.

All the major NLEs will translate fade information reliably through OMF or AAF (I’ll explain the difference between these formats in another post).  The first thing to realize is that each NLE treats gain (volume) differently.  I’ll briefly summarize here:

Avid (Media Composer)

MC uses clip gain as well as keyframe automation, and both come through the OMF/AAF non-destructively.  The newer versions of ProTools use clip gain as well, and it works exactly like it does in Avid.  (Older versions of PT used to convert clip gain to volume automation, and it could sometimes drive me crazy!  An editor would cut a clip to apply clip gain with a crossfade, and I would get the crossfade AND the keyframed automation – it was bizarre.  Now, I get the clip gain the way it was intended and it’s beautiful!)  This is why the workflow of applying clip gain and using crossfades is so solid.  It is less “audio guy”-like, but it’s a great workflow from Avid to PT, and I’ve found myself using it more and more.

Final Cut Pro 7

FCP versions 6 and 7 use keyframe volume automation, but no clip gain.  Very straightforward.  If you’re on a version earlier than 6, your options are very limited.  FCPX is a different animal, and there are great resources out there to help you.

Adobe Premiere Pro CC

Premiere is more complicated.  You can apply gain to a clip, which is destructive.  You can use clip gain right on the track to add up to 6dB, and you can also keyframe on the track (same as in Track Mixer).  Clip keyframes will translate via OMF, and they can be smoothed with crossfades.  Even though the interface may show a sudden jump in gain, the clip gain comes before the crossfade and is actually smoothed out by it.  If this sounds confusing, please leave a comment below.  Much more information is coming on this topic.

The use of keyframes as fades

It is common practice for editors to use keyframe volume automation to ramp up the audio at the beginning of a clip (music, NATs, etc.) and to ramp it down before the end of the clip.  If this is your workflow, I can’t really say that it’s wrong…  But if you send all of your projects to audio post, I can tell you that the person receiving your files would probably prefer a cleaner timeline.  One reason for this is that ProTools will create volume automation between the clips, essentially keyframing the entire track.  So if you’re using keyframe data as fades, the track will sit at zero volume (negative infinity) between clips.

Also, in ProTools we aren’t usually looking at the volume automation, so you may have already pulled the music out with volume automation, but we still see the clip on the timeline for another five seconds.  Basically, you get a better visual representation of what’s really going on when you use fades as fades and volume automation as volume automation.

So far, I’m sure there hasn’t been any compelling reason for some of you to stop using keyframes in place of fades, and that’s okay.  It’s not one of my most passionate topics.  I personally think it’s easier in the NLE to use fades and if you try to get used to it, you may fall in love with that workflow.  If you’re not married to either process, use the fades.  There’s a real time and place for keyframed volume automation and getting comfortable with it will greatly improve your mixes.  If your comfort with keyframes means that I have to delete a few when it comes to me, then so be it.

Thanks for reading, keep cutting, and please find me on Twitter:  https://twitter.com/MichaelAudio

Pan vs. Balance

A big part of mixing audio is panning, which is like steering your audio between left and right (or multi-channel for surround). Sometimes we use panning to isolate stereo sources to mono outputs, which can be the case when making splits.  But what would happen if what you thought was a panner was really a balance control?

In Premiere, what looks like a panner (an audio steering wheel) acts more like the two levers of a Green Machine or Pod Racer!  When you want to steer left, you decrease the right – instead of steering the right channel content to the left.  Sound confusing?  Check out this Mixing Minute video!  (and read below to dive just a little deeper)

So, how else could a DAW (Digital Audio Workstation) handle stereo content?  Well, with a stereo panner.  ProTools, for example, puts two panners on a stereo track or aux input (submix track).  The default for these panners is hard left and right, but if you want to “pan” the stereo content, you have separate controls for each channel.  In the case of the splits mentioned above, the right panner would be set all the way to the left.

Stereo panner in ProTools

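If numbers make it clearer, here is a tiny NumPy illustration of the two behaviors (a conceptual sketch, not how either application is implemented internally): a balance-style control just turns the opposite channel down, while dual panners re-route that channel’s content.

```python
import numpy as np

# Pretend the left channel holds a guitar and the right channel a shaker.
left = np.array([0.5, 0.5, 0.5, 0.5])     # "guitar"
right = np.array([0.3, -0.3, 0.3, -0.3])  # "shaker"

# Balance-style "pan hard left" (Premiere-like behavior): the right channel
# is simply attenuated to silence, so the shaker disappears from the mix.
bal_left, bal_right = left, right * 0.0

# Dual-panner "pan hard left" (ProTools-like stereo track): the right
# channel's content is steered into the left output, so nothing is lost.
pan_left, pan_right = left + right, np.zeros_like(right)

print("Balance, left output:", bal_left)   # guitar only
print("Panners, left output:", pan_left)   # guitar + shaker, and a hotter signal
```
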
The resulting mono signal from the dual panners in ProTools would be significantly hotter than what you get from the technique used in Premiere.  In Premiere, the resulting gain is reduced by a considerable amount whether you place stereo content on a mono track or route stereo content through a mono sub.  But these are topics for another post!  Comments?  Find me on Twitter:  https://twitter.com/MichaelAudio.  And thanks for reading!

Best Reverb plugin I’ve ever heard!

Have you ever tried to get realistic sounds from a digital reverb, and been disappointed at how granular or “digital” they can sound?  Well, I want to introduce you to the reverb that changed all that for me!  It’s called Altiverb, from a European company called Audio Ease.

It’s not just any reverb; it’s a convolution reverb, which means the reverb is based on actual sample recordings (impulse responses) of real spaces.  These guys travel the world to bring you real-world spaces in which you can place your audio.  In addition, they’ve given you the ability to record your own IRs (Impulse Responses) so that you can recreate the room sound from anywhere and use it with audio recorded somewhere else.  <<I will DEFINITELY post more detail on this topic in the future, but for today I will focus simply on the incredible effect that one of its presets can have on a piece of music.>>

Reverb 101:  Quite often the way you use a reverb, like many time-based effects, is to send the dry (original signal) to the input of the reverb and bring the “wet” or reverberated signal back to be mixed with the original (“dry”).  You essentially control the “wet/dry” mix by deciding how much of the reverberated signal you want to mix with the original.  This determines the amount of the effect and the perceived distance between the source and the listener.

How is Altiverb different?  Well, Altiverb is different in many ways, but today I’m focused on how you might use it.  Altiverb sounds so good that, most of the time, I use it as an insert (the entire signal passes through the device and wet/dry is controlled in the plugin) on either an individual track or a submix.
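
To demystify the wet/dry idea, here is a bare-bones convolution reverb sketch in SciPy (a toy, nothing like Altiverb’s feature set): the dry signal is convolved with an impulse response and mixed back against the original, just like the insert-style usage described above.  The file names, the mono assumption, and the 30% mix are placeholders.

```python
import numpy as np
import soundfile as sf          # assumed helper library for WAV I/O
from scipy.signal import fftconvolve

dry, sr = sf.read("strings.wav")         # placeholder file names; mono assumed
ir, _ = sf.read("concert_hall_ir.wav")   # impulse response of the space

wet = fftconvolve(dry, ir)[: len(dry)]                        # convolve, trim the tail
wet *= np.max(np.abs(dry)) / (np.max(np.abs(wet)) + 1e-12)    # crude level match

mix = 0.30                               # wet/dry mix, set to taste
out = (1 - mix) * dry + mix * wet
sf.write("strings_in_the_hall.wav", out, sr)
```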

I was working on a piece of music with Tanya Ostrovsky, an amazing musician and composer, for which we needed to come up with a pirate theme.  Her music was perfect, but I wanted to make it sound as if we had hired musicians to play in an incredible concert hall.

I routed all my strings to a submix and inserted Altiverb.  I chose the Concertgebouw concert hall in Amsterdam, Netherlands.  I adjusted the wet/dry mix to taste, and… amazing!  It sounded so good, I sent the percussion and flute to the same sub and renamed it “orchestra” instead of “strings”.  It’s essentially my entire mix running through this reverb.  If you’ve ever used reverb this way (which, typically, you shouldn’t), you may have used a wet/dry mix of anywhere from 6% to 20%, but in this case I am using over 30%.  It would be extremely rare for me to use that much reverb in this type of application with any other plugin.  Listen to my example below.

Comments?  Find me on Twitter:  https://twitter.com/MichaelAudio