How Not To Destroy Your Guitar Tone [Video]

| Audio Example, Mixing, Plugins, Pro Tools, Tips, Video

Picture this: you’ve spent hours recording and capturing the perfect guitar tone from your amp. But come mix time, you listen back to your tracks and they sound totally thin and unrecognizable. You swear the mics sounded great by themselves when you set them up, but now that they’re all playing together in the mix your tone has been completely obliterated! Where did it go? We’re about to find out…



16 Responses to “How Not To Destroy Your Guitar Tone [Video]”

  1. Martin

    This makes me laugh, as it reminds me of the exact same situation I was in when I first started recording. I was recording my Line 6 amp and had an SM57 on the front and a 57 on the back capturing the low end of the speaker cone. This was the first time ever that I discovered phase. I was baffled when I was combining both mics and thinking “why does this sound so bad?! My guitar tone sounded fine just a minute ago!”

    Definitely a great tip. I’d say phase is probably the most important thing when recording… that along with not clipping the signal on the way in and the quality of the instrument itself.

    Reply
  2. Henri Vlot

    Here is a funny trick I learned at the homerecordingshow.com when multi-miking a guitar amp: place the first microphone wherever you like, or wherever sounds best, then when placing the second mic search for the spot where the two mics sound horrible together, then flip the phase, and you have a really great sound. It’s just easier to hear an awful sound, if you get what I mean?

    Reply
  3. Rob

    This is super helpful!
    Once again, I knew that this happens and this issue exists but had no idea how to fix it or what was going on theoretically. Thanks again Graham!!

    Yes get well soon!!

    Reply
  4. Marty

    Hi Graham.

    I’m assuming it’s possible for two tracks (same performance, different mics) to be out of phase by something other than “complete opposite”?

    If I was to try plotting a simple illustration of what you dealt with, this is what I came up with (Example 1):
    http://www.wolframalpha.com/input/?i=sin%28x%29%3B-sin%28x%29
    Basically two sine waves (easier to graph than a distorted guitar!) but perfectly out of phase with each other.

    But what about this (Example 2):
    http://www.wolframalpha.com/input/?i=sin%28x%29%3Bsin%28x%2Bpi%2F2%29
    This is basically two sine waves out of phase by 90 degrees. In this case, no matter how you invert phase on either track, you’ll always end up with bands during the wave where one signal is positive and the other is negative, therefore resulting in some degree of cancellation.

    In this plot, I move them ALMOST completely out of phase with each other (Example 3):
    http://www.wolframalpha.com/input/?i=sin%28x%29%3Bsin%28x%2Bpi*0.8%29
    In this case, inverting phase would definitely improve the mix, but it would still have a small degree of cancellation.

    In this plot, I move them only SLIGHTLY out of phase with each other (Example 4):
    http://www.wolframalpha.com/input/?i=sin%28x%29%3Bsin%28x%2Bpi*0.2%29
    In this case, inverting phase would make things worse, but again there is a small degree of cancellation.

    Although toggling phase on a track will give you two options (and you select the better of the two with less cancellation), would it not help to also “shift” one of the tracks so that the waveforms are in perfect alignment?

    I speculate possible reasons as being:
    - Mics at different distances
    - Mics at different orientations
    - Incorrectly wired/patched leads
    - Different A/D latency per mic (if using multiple interfaces)

    I’m not very hands-on these days, but I still enjoy learning/thinking about this stuff. Thanks for a great video, which inspired me to think/write this!

    P.S. When looking at the WolframAlpha links, there is a section called “Parametric Plot” under the regular “Plots”. This is quite good for visualizing how much time is spent “in” or “out” of phase.
    Top-Right means both signals are positive = Good
    Bottom-Left means both signals are negative = Good
    Bottom-Right or Top-Left means one signal is positive, other is negative = BAD
    My Example 1 plot is worst case scenario where it just ends up with a line through the BAD sectors. Flipping phase would make it all good!
    My Example 2 plot is equally spread out between good and bad. Flipping phase would make no difference.
    My Example 3 plot is mostly bad, with a bit of good thrown in. Flipping phase would make it mostly good, with a bit of bad thrown in (therefore recommended).
    My Example 4 plot is mostly good, with a bit of bad thrown in. Flipping phase would make it mostly bad, with a bit of good thrown in (therefore not recommended).
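
    To sanity-check the four examples numerically, here is a small Python/NumPy sketch (my own illustration, not from the video) that computes the RMS level of sin(x) summed with a phase-shifted copy, both as-is and with the polarity flipped:

```python
import numpy as np

def combined_rms(phase_offset, invert=False):
    """RMS level of sin(x) mixed with a phase-shifted copy of itself.

    invert=True models pressing the polarity button on the second
    track, i.e. negating its signal before the two are summed.
    """
    x = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
    a = np.sin(x)
    b = np.sin(x + phase_offset)
    if invert:
        b = -b
    return np.sqrt(np.mean((a + b) ** 2))

# Marty's four examples: offsets of pi, pi/2, 0.8*pi and 0.2*pi
for name, offset in [("Example 1", np.pi),
                     ("Example 2", np.pi / 2),
                     ("Example 3", 0.8 * np.pi),
                     ("Example 4", 0.2 * np.pi)]:
    as_is = combined_rms(offset)
    flipped = combined_rms(offset, invert=True)
    if abs(as_is - flipped) < 1e-6:
        verdict = "no difference"
    elif flipped > as_is:
        verdict = "flip polarity"
    else:
        verdict = "leave as-is"
    print(f"{name}: as-is {as_is:.3f}, flipped {flipped:.3f} -> {verdict}")
```

    As expected, Example 1 sums to silence as-is and to full level when flipped, Example 2 comes out identical either way, and Examples 3 and 4 land in between.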

    Reply
    • Andrew Bauserman

      Marty – Cool use of WolframAlpha :) And your analysis is correct.

      The phase (aka polarity) button Ø is equivalent to swapping pins 2 and 3 on an XLR cable. Everything that was positive voltage becomes negative, and vice versa. As a result, the signal is 180º out of phase across the entire waveform (all frequencies). As your examples 1-4 show, when the sound is out of alignment but not fully reversed, inverting one signal relative to the other often makes them sound “better” together (closer to in phase), albeit not perfectly matched.

      An alternative method is “time alignment” — delaying one track a few samples compared to another so that the peaks and valleys line up. If you have a near and far mic on a single source (like a guitar cabinet), this is the more accurate way to line up the signals. But as it’s a bit more tedious to set up, many audio pros try the phase/polarity button first to see if that gets the signals “close enough”.
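
      To make the time-alignment idea concrete, here is a small Python/NumPy sketch (my own illustration, not a feature of any particular DAW) that estimates the delay in samples between a near and a far mic via cross-correlation, then nudges the far track back into line:

```python
import numpy as np

def best_lag(near, far, max_lag=200):
    """Estimate how many samples `far` trails `near`, by picking the
    lag at which the cross-correlation of the two tracks peaks."""
    trimmed = near[max_lag:len(near) - max_lag]
    lags = range(-max_lag, max_lag + 1)
    scores = [np.dot(trimmed, far[max_lag + lag:len(far) - max_lag + lag])
              for lag in lags]
    return lags[int(np.argmax(scores))]

# Toy stand-in for a close mic and a distant mic on one cabinet:
# the same burst, with the "far" copy 13 samples late (~9 cm at 48 kHz).
n = 2048
t = np.arange(n) / 48000
burst = np.sin(2 * np.pi * 110 * t) * np.hanning(n)
near = burst
far = np.roll(burst, 13)

lag = best_lag(near, far)      # finds the 13-sample delay
aligned = np.roll(far, -lag)   # far mic nudged back into alignment
```

      Dedicated alignment tools work along these lines, though usually with sub-sample precision.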

      A third approach, which is more “by ear”, is to use EQ and crossover circuits. Any EQ or crossover not specifically advertised as “linear phase” will change phase relationships of various frequencies. Applying different EQs or crossovers on 2 tracks of the same signal can produce comb filtering — or conceivably reduce existing filtering on certain frequencies based on phase or time alignment issues.
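
      The simplest case of comb filtering, a plain time offset between two copies of the same signal, is easy to check with a few lines of Python (again my own illustration): summing a signal with a copy delayed by d seconds has magnitude response |1 + e^(-j2πfd)| = 2|cos(πfd)|, so notches land at odd multiples of 1/(2d).

```python
import numpy as np

def comb_gain(freq_hz, delay_s):
    """Gain of summing a signal with a copy of itself delayed by
    delay_s seconds: |1 + exp(-j*2*pi*f*d)| = 2*|cos(pi*f*d)|."""
    return np.abs(1 + np.exp(-2j * np.pi * freq_hz * delay_s))

d = 0.001  # 1 ms between the two paths
print(comb_gain(0, d))     # ~2 -> +6 dB where the copies reinforce
print(comb_gain(500, d))   # ~0 -> first notch at 1/(2*0.001) = 500 Hz
print(comb_gain(1000, d))  # ~2 -> back in phase at 1/0.001 = 1 kHz
```

      A 1 ms offset, roughly a 34 cm difference in mic distance, therefore notches 500 Hz, 1.5 kHz, 2.5 kHz and so on, which is exactly the thin, hollow sound described in the post.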

      You can find hardware and software solutions that employ each of these methods, and can replicate them with built-in features on most DAWs.

      Reply
      • Marty

        Thanks for the additional info. You just blew my mind with thinking about how EQ changes the relationship as well!!! I’m going to dedicate some thinking time to that…
        :-)

        Reply
  5. Riad

    OK, now I know what a phase problem is and how to overcome it… Definitely going to double-mic my guitars from now on, without phase issues.
    That’s awesome! :D

    Reply
  6. Anodine

    Or you could just send the signal through a virtual amp modeler on a sidebus and mix it in like Graham explained in another video. That way you won’t have any cancellation issues. Worked for me. Used a 4×12 low emulation to give guitars a fatter sound on my album.

    Reply
  7. Henri Vlot

    Thanks for the explanation, Marty.
    Your post really helped me out with understanding the phase scope, another great tool for getting your phase right!
    I just couldn’t get the hang of it, but now I can finally use it (:

    Reply
    • Marty

      Glad it helped! Just wish I had more time in my life so I could actually apply some of this in practice. Alas, I haven’t recorded anything in a few years due to other projects taking priority. Soon tho!!!

      Reply
  8. Jake

    If you mic a guitar performance with two different types of mics — like in this video, where the dynamic and condenser mics pick up the sound in very different ways — can you use the two recordings for double tracking? That is, double tracking typically involves two separate recordings of the same part, because the differences between the two are what make the technique work. Is it possible to use the two different mic recordings of the same performance to get the same, or a similar, result? Or are the tonal differences between the recordings not enough?

    Reply
    • Graham

      Not really. It will sound like a mono source, just with tonal issues. It’s best to blend these two mics into a single new sound, OR record two separate passes of the guitar part, i.e. a real double.

      Reply

Trackbacks/Pingbacks

  1.  Phase cancellation explained using an electric guitar amp recording as an example. | Pro Audio News & Tutorials | mixingroom.de

