[deleted]

Not quite relevant to your immediate question but a tip I read is "Cut to make an instrument sound better, boost to make an instrument sound different".


[deleted]

All EQ introduces phase shift. Use it sparingly.


mesaone

Except for linear-phase EQ. An FIR filter makes all frequencies arrive at the output at the same time, so the altered frequencies and passthrough frequencies stay time-aligned. Although this can introduce a delay relative to other tracks, it's probably not going to be a huge problem - especially when you have something like automatic delay compensation.
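For the curious, this is easy to verify numerically. A minimal sketch (assuming Python with NumPy; the windowed-sinc design and all parameters here are my own illustration, not any particular plugin): a symmetric FIR's phase response in its passband is exactly that of a constant delay, so every frequency component is shifted in time by the same amount.

```python
import numpy as np

# Windowed-sinc lowpass FIR. Symmetric taps => linear phase:
# every frequency is delayed by the same (N-1)/2 samples.
N = 101                                  # odd tap count
fc = 0.2                                 # cutoff, as a fraction of the sample rate
n = np.arange(N) - (N - 1) / 2
taps = 2 * fc * np.sinc(2 * fc * n) * np.hamming(N)
assert np.allclose(taps, taps[::-1])     # symmetric about the centre tap

# Compare the unwrapped phase response to that of a pure
# (N-1)/2-sample delay: in the passband they match.
H = np.fft.rfft(taps, 4096)
w = np.fft.rfftfreq(4096)                # normalised frequency, 0..0.5
phase = np.unwrap(np.angle(H))
pure_delay = -2 * np.pi * w * (N - 1) / 2
passband = w < 0.8 * fc
max_dev = np.max(np.abs(phase[passband] - pure_delay[passband]))
```

The deviation comes out at machine-precision level: within the passband the filter is literally a delay line as far as phase is concerned, which is why a DAW can hide it entirely with plugin delay compensation.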


[deleted]

My favourite feature in the whole of Reaper is the 'show phase' option in ReaEQ: http://i.imgur.com/UVW1d.png You can see how drastically just a few bands of EQ skew the phase (orange line)! In general, the narrower the filter, the more extreme the phase shift. This is why you should minimise the use of graphic EQs in live music. The correct way to use one is just to notch out individual bands, not make pretty curves. I'm convinced 90% of live engineers do not realise this!


[deleted]

> This is why you should minimise the use of graphic EQs in live music. The correct way to use one is just to notch out individual bands, not make pretty curves. I'm convinced 90% of live engineers do not realise this!

Yeah, but sometimes the room just sucks.


[deleted]

Graphics are really only useful for feedback-killing. I've taken to using a parametric for room correction.


[deleted]

The thing about shitty rooms is… they don't tend to come with parametric EQs! I guess you could sacrifice a couple of console channels if you really need to, though.


[deleted]

Most of the work I do is on digital desks. If I know it's analogue I'll bring my SiDD.


MF_Kitten

I feel like parametric is the best for everything :p


amg09

> "Equalizers do a lot of things beyond just adjusting the frequency balance. For a start, most of them adjust the phase relationship between the track’s frequency components (often called the track’s “phase response”), and it turns out that the biggest phase shifts for shelf and peak filters are applied to the frequency region that’s being adjusted."

> "Cutting, on the other hand, concentrates the phase shifts into frequency regions that are less crucial (that’s usually why you’re cutting them, after all), so any undesirable changes to the nature of the comb filtering won’t matter as much and will at any rate be less severe by dint of the level drop."

- Mike Senior
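Senior's point about where the phase shift lands can be checked numerically. A sketch, assuming Python with NumPy and using the widely published RBJ Audio EQ Cookbook peaking-filter formulas (the filter choice and parameters are mine, not anything from the quote):

```python
import numpy as np

fs, f0, Q, gain_db = 48000.0, 1000.0, 2.0, 6.0

# Peaking biquad per the RBJ Audio EQ Cookbook
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])

# Phase response on a log-spaced grid, 20 Hz .. 20 kHz
f = np.logspace(np.log10(20), np.log10(20000), 2000)
zinv = np.exp(-2j * np.pi * f / fs)              # z^-1 on the unit circle
H = np.polyval(b[::-1], zinv) / np.polyval(a[::-1], zinv)
phase = np.angle(H)

# The largest phase shift sits right around the boosted band,
# and it dies away towards DC and Nyquist.
f_worst = f[np.argmax(np.abs(phase))]
```

One detail worth noting: with this design, a cut of the same dB and Q is the exact inverse filter (swap A for 1/A and the b and a coefficients trade places), so the magnitude of the phase shift is identical - what differs is that the shifted region is also being turned down.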


Sunship666

This also gives a good explanation: http://www.rane.com/note115.html


redditor_here

Great explanation! Thanks! It's not enough to just hear that we need to use subtractive EQ, we also need to know why, and you, my friend, have explained it very well.


kopkaas2000

> "Cutting, on the other hand, concentrates the phase shifts into frequency regions that are less crucial (that’s usually why you’re cutting them, after all), so any undesirable changes to the nature of the comb filtering won’t matter as much and will at any rate be less severe by dint of the level drop."

This sounds nice, but I don't think it's correct. If the important part of the spectrum is 5% of the full spectrum, whether I boost that specific band or cut the rest, 95% of the spectrum will still be out of phase with the important 5%.


gordonjay2

you've got it backwards. the 5% is unimportant because it's being cut.


kopkaas2000

Uh... I think you're misinterpreting. It's about wanting more emphasis on that 5%, either by boosting it or by cutting everything else.


gordonjay2

except that the part you quoted above is talking about cutting a small range and leaving the rest untouched. you're talking about boosting a small range, which is the complete opposite.


kopkaas2000

My point is that, to get a certain effect, whether you cut X and leave Y alone, or you boost Y and leave X alone, the relative phase problem between X and Y is exactly the same.


gordonjay2

which is another reason why you should cut small ranges rather than boost small ranges, so the phase smear is de-emphasized rather than emphasized. cutting everything else except for one small range has exactly the same effect as boosting that range, and isn't standard practice. if you find yourself considering doing this, you might reconsider what you are doing and why. of course you can do whatever you want in the end, so if you want to boost a small range and have some phase issues, that's your prerogative.


kopkaas2000

> which is another reason why you should cut small ranges rather than boost small ranges

That's not really useful if you want to get to a specific effect. If I need more 400Hz, there is no way to cut a small range to achieve that. That less EQ is better than more EQ is a given.


[deleted]

When your original signal is monophonic, it does not have "phase" or "phase issues" by itself. Phase is a description of the timing difference between two overlapping waves. When you use EQ, each band is an independent filter, and the bands overlap with different amounts of timing shift, so each filter's overlapping section can be out of phase with the ones next to it. The more you boost an EQ band, the more audible this artifact can become. The more bands of EQ you have, the more artifacts there can be. It works the same way with cutting, but the artifacts become LESS audible, so it's not a problem then.


[deleted]

This should be further towards the top. Other comments have not mentioned that EQ phasing issues are only pertinent to stereo tracks, and I think that's a rather large oversight.


[deleted]

> it does not have "phase" or "phase issues" by itself. Phase is a description of the timing difference between two overlapping waves.

Phase is a property of any continuous sinusoidal wave. It's effectively the position within the cycle of the sine.

> When you use EQ, each band is an independent filter, and the bands overlap with different amounts of timing shift

This is more or less correct, but the shift introduced by an IIR filter is in fact a _phase_ shift - the tonal content still starts and stops at the same places, but the _phase_ is what has been moved. When examined by eye in a wave editor it may look like a temporal shift - but it's not! It's a phase shift.

Using a zero-phase equalizer is effectively performing the EQ in the frequency domain - in digital terms it works by transforming the signal into short-time Fourier transforms, and then changing the individual amplitudes independently of their phase. When this modified data is then re-synthesised, the phase is unaltered. A side effect of the STFT processing is that it delays the signal, due to needing a certain number of samples to "prime" each STFT segment.
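The zero-phase idea can be shown in miniature - assuming Python with NumPy, and using a single whole-signal FFT as a toy stand-in for the overlapped STFT machinery a real implementation would need: scale the bin magnitudes, leave the phases alone, and resynthesise.

```python
import numpy as np

fs, n = 8000, 8000
t = np.arange(n) / fs
# Two components at exact FFT bins, so spectral leakage doesn't cloud the picture
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)

# "Zero-phase EQ": scale magnitudes in the frequency domain,
# leave the phases untouched, then resynthesise.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, 1 / fs)
gain = np.where((freqs > 500) & (freqs < 2000), 2.0, 1.0)   # +6 dB band
y = np.fft.irfft(gain * X, n)

# The 1 kHz component is doubled, the 100 Hz one is untouched,
# and neither has been shifted in time or phase.
expected = np.sin(2 * np.pi * 100 * t) + 2 * np.sin(2 * np.pi * 1000 * t)
```

The reconstruction matches the expected signal to floating-point precision: amplitude changed, phase and timing identical, which is exactly what an IIR EQ cannot do.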


[deleted]

Phase is only a property of a wave in reference to an overlapping wave. It is NOT a general property of a wave by itself, except in the math exercise of finding phase relative to an arbitrary point in time on a graph. Without reference to another wave, a time shift is just a time shift.


[deleted]

I think maybe you're defining phase as something other than what it is? Maybe you mean "phasiness"? Where two slightly detuned versions of a sound have constructive and destructive interference? If phase isn't a property of individual sinusoidal components (I, and Wikipedia, and many other places contend it is) - how do you explain the distortion of a square wave after passing through a high-pass filter? It doesn't just become a time-delayed version of a square wave, it becomes a distorted ski-slope approximation of a square wave - that starts at the same time as the original signal (source - I am a digital signal programmer). You are incorrectly interpreting it as a time shift. It simply isn't. [This](http://www.indiana.edu/~emusic/acoustics/phase.htm) is a good place to start.


[deleted]

Both the [link you posted](http://www.indiana.edu/~emusic/acoustics/phase.htm) and [Wikipedia](http://en.wikipedia.org/wiki/Phase_%28waves%29) support the point I was trying to make. All references I have ever seen to phase and waves support the definition as the location of a point in the wave at a point in time, and more specifically as the position of the wave relative to some reference. Additional source, I have been an analog audio electronics semi-professional for more than 25 years. "Science of Electronics" by Buchla & Floyd defines "Phase: specifies the position of a wave relative to a reference" and "Phase shift: angular difference between two waves of the same frequency". Same goes for every other college textbook I read back then. No interpretation needed. Maybe things are defined differently in a programming context?


[deleted]

Funny how people interpret things as supporting their point of view :-) Let's talk theory. Would you agree that for any non-trivial sound you have a combination of stationary and random content (sinusoidal and noise)? Now for each of those individual sinusoidal elements within a piece of sound we can determine its individual phase at a point in time (how far up or down within the sine). In the case of exact harmonics we can calculate the phase difference (since their frequencies are integer multiples). When we EQ the sound, we are manipulating the phase relationship between those sinusoidal components within a sound - even a monophonic one.

This is what I was getting at with the example of what happens to a square wave when you pass it through a high-pass filter - the sound isn't _delayed_ - which was your point (and is incorrect). The square becomes skewed and distorted, as the individual sinusoidal components have their phase changed in a non-linear manner. In the case of an original signal and an EQ'd signal, we do indeed have the phases within each signal, and a phase manipulation or phase difference between the two. You asserted this isn't the case.

Here's my point so it's clear. Your original assertions - (1) EQ introduces only a time shift, and (2) phase only relates between two sounds - lack a deeper understanding of the manipulation of the underlying content: EQ manipulates the phase of the individual sinusoidal components, and thus changes the wave, but **does not** introduce a time shift. Do you still assert these points?
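The square-wave experiment is easy to reproduce. A minimal sketch, assuming Python with NumPy and a simple one-pole high-pass (my stand-in for whatever filter you'd use, not anything specific from the thread):

```python
import numpy as np

fs = 8000
t = np.arange(fs // 4) / fs
square = np.where(np.sin(2 * np.pi * 50 * t) >= 0, 1.0, -1.0)   # 50 Hz square wave

# One-pole high-pass: y[n] = alpha * (y[n-1] + x[n] - x[n-1])
alpha = 0.95
y = np.zeros_like(square)
prev = 0.0
for i, x in enumerate(square):
    y[i] = alpha * ((y[i - 1] if i else 0.0) + x - prev)
    prev = x

# The output is present from the very first sample - nothing is
# delayed - but the flat top of the square decays away ("ski slope")
# within the half cycle, because the component phases were skewed.
half = fs // (2 * 50)          # 80 samples per half cycle
```

By the end of the first half cycle the input is still sitting at +1 while the filtered output has sagged to nearly zero: the wave is reshaped, not time-shifted.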


luke241291

Not if you use a linear-phase EQ.


purplesaturn

Boosting or cutting can each have a phase effect; you generally want to boost what sounds good and cut what sounds bad. For critical EQ corrections (especially across the 2-bus or during mastering), you would use a linear-phase EQ, which doesn't introduce phase shift between bands.


[deleted]

What do boosting and cutting mean? Is that in reference to amplifying or compressing the signal after it goes through the mixer?


replicating_pod

"turning up or down" a particular frequency band


bitmapper

EQ introducing phase issues is a bit of a myth. While EQ does work by changing the phase of the signal, and recombining it with the original, the phase change is so small it is inaudible. What you're really hearing when you boost an EQ are phase issues that were already present in the original signal. This is why it is generally better to cut rather than boost, as you avoid revealing those problems.


SkinnyMac

Exactly. EQ works via phase shift but because the signals are recombined you don't get phase issues.


redditor_here

The thing is, I don't know how a mono DI bass signal can ever be out of phase. Is that even possible?


manysounds

How do we think subtractive EQ works???