Basic Mastering of a Live recording!


Teralus

Recommended Posts

My work laptop has Adobe Audition, so I am thinking I will be able to use that to do some basic mastering on my recordings. Does anyone have any suggestions on useful things to apply to recordings? At this stage I am thinking I will probably need to normalize my recordings - they are a little quiet when I use the battery box - and this seems easy enough to do.

Anything else?

It seems like such powerful software, and I don't really know how to use any of the features... having said that, I don't want to mess with my recordings just for the sake of it! What about equalization? What about this Binaural Auto-Panner? I have binaural mics, so it seems like something that would be useful.

I am doing some reading, so I am not looking for someone to hold my hand through it - I am just looking for suggestions, experiences, etc.

Cheers


Everything depends on the quality of your original recording. If it's good to begin with, you don't want to mess with it. Every bit of processing is a change in the music you captured.

When classical music sound-quality geeks record, they often proudly proclaim that all they've done is set up a pair of high-quality mics and record the signal at a very high bitrate or high-quality analog. No post-processing at all.

But you're probably not recording a symphony orchestra in a soundproofed studio with $2000 mics. So if your recording has specific problems, you can compensate for them.

Normalize smooths out the peaks and valleys of a recording. Before you do that, with your quiet battery-box recording, I'd suggest plain old Amplify: boost the whole recording equally. Try a little section (highlight it) at various settings to see how much you want to amplify it.
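
For anyone curious what Amplify is actually doing under the hood, it's nothing more than multiplying every sample by one fixed gain. Here's a rough sketch in Python - not Audition's actual code, just the idea - assuming your samples are floats in the -1..1 range:

```python
def amplify(samples, gain_db):
    # Convert the dB figure to a linear multiplier and apply it to
    # every sample equally; +6 dB roughly doubles the amplitude.
    gain = 10 ** (gain_db / 20.0)
    return [s * gain for s in samples]
```

Note that nothing here stops the result from going past full scale - which is exactly why you audition a small highlighted section at various settings first, as suggested above.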

Then, are there wild peaks and valleys? Some live music has them, some doesn't. If yours does, then go ahead and normalize.

Equalization is your tone controls. Too much bass? Lower it. Can't hear the vocals? Try boosting around 100-600 Hz. Sound dull? Push up some of the highs. Does your mic favor certain frequencies and cut back on others? Change the EQ to reverse the problems.

Real mastering engineers have this down to a science and examine every instant of the sound. But the more you tweak, the less fidelity you have to the original recording. An EQ fix that will probably help is to take down the bass, since every concert seems to have too much bass. Beyond that you'll have to experiment.

There are some tricks you could try. Noise reduction will analyze a sample of room noise (if you have one with no music) and then remove that same spectrum of sounds from your music. It would take some of the music with it, however. Filtering gives you some very precise EQ: if there's a conversation nearby and you can figure out what frequency the people are speaking at, you could make a notch filter for that annoying voice. You can also do fade-ins and fade-outs of applause.
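
To make the notch-filter idea concrete: one standard way to build a notch is the biquad from the widely used "audio EQ cookbook" formulas. This is a hypothetical pure-Python sketch (any real editor's filter tools will do this for you), assuming mono float samples and a known sample rate:

```python
import math

def notch_coeffs(f0, fs, q=5.0):
    # Biquad notch coefficients (audio-EQ-cookbook style), normalised so a0 = 1.
    # f0 = centre frequency to remove, fs = sample rate, q = narrowness.
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]
    a = [-2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def biquad(samples, b, a):
    # Direct Form I difference equation:
    # y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

A narrow notch like this takes out the offending frequency almost completely while leaving material even a little way off to the sides nearly untouched - which is also why it only works if the voice sits at a fairly steady pitch.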

A lot of the other stuff in Audition is for studio work: adding effects (flanging, phasing, delay, reverb) to instruments, moving things around in the stereo mix, and similar things that are inappropriate for a live recording. Panning (as in panorama) is moving back and forth across the stereo field. If you were doing a studio production where you wanted some synth noise to swoop around the room, you'd use it. But for this....well, you can always Undo.


A440, I think you're confusing normalizing - which is basically just a precise, automated way of finding the peak level and amplifying the whole thing to a different peak level; it doesn't mess with dynamics at all - with (hard) limiting, which is a kind of dynamics processing and can be helpful to even out occasional peaks. Whatever you do, always keep the original.


OK, I admit it, Normalize has always confused me because Nero, for instance, will "normalize" a bunch of tracks to sound smooth in sequence, which means boosting some and lowering others: altering dynamics. But apparently the term is used two different ways.

This is from

http://homerecording.com/normalizing.html

"Normalizing, as far as Sound Forge or other digital audio editors are concerned, simply means to adjust the peak volume of a selection to a known value. Generally the recommended maximum is -0.5 dB. ...

Normalizing a set of tunes to be burned to CD, however, means something slightly different. Here it implies that you're adjusting the average volume of those songs so that they will all sound about equal."

So apparently with Audition and other sound editors, Normalize is like an optimum setting of Amplify. Good to know.

By the way, I just stumbled across this:

http://web.archive.org/web/20030201093835/...m/articles.html

Looks like a wealth of information there.

Edited by A440

As I see it, there are three basic kinds of normalising:

1 - and this goes with A440's first listed def above, is to increase the level of a track so that its peak levels are just below 0dBfs [digital maximum]. This is specifically called peak normalisation and does not involve dynamics processing other than a single volume change to meet the new peak level, which occurs without the possibility of clipping distortion because it depends on analysing the entire section to be processed first. There is no compression involved.

Almost -no- consumer software, and -no- form of realtime normalisation as performed by DirectX or VST plugins, uses this method.
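
Peak normalising really is just two steps - scan the whole selection for its peak, then apply one fixed gain - which is exactly why it can't clip and can't run in real time. A rough Python sketch, assuming floats in -1..1 and the commonly recommended -0.5 dB target:

```python
def peak_normalize(samples, target_db=-0.5):
    # Scan the whole selection for its absolute peak first, then apply
    # ONE fixed gain so the new peak sits at target_db (re 0 dBFS).
    # Dynamics are untouched; clipping is impossible by construction.
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)
    gain = (10 ** (target_db / 20.0)) / peak
    return [s * gain for s in samples]
```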

2 - to measure the peak RMS level of the section to be processed and then increase levels to match a specified RMS level, dynamically compressing the signal if levels exceed a certain amplitude. This is specifically called RMS normalisation by most who use it.

This is typically the most even-sounding way to do normalisation, though if your levels vary greatly from track-to-track [i.e. mixing 80s or early 90s-mastered CD-sourced music with the current bitpushed garbage that nearly all CDs contain] the results can be less than pleasing, with "newer" tracks falling dramatically in volume and older tracks often still being "too quiet", depending on the exact settings used.

This is also offered by software like Sound Forge, whose "normalisation" dialogue has the option for either peak or RMS normalising. Both methods require analysing the *entire* section to be processed first, and cannot be done in real time.
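
Sketched the same way, RMS normalising matches average energy instead of the peak - which is why it needs dynamics processing behind it: the resulting peaks can land above full scale. A deliberately simplified single-pass version in Python (real implementations measure RMS over windows and follow up with compression/limiting):

```python
import math

def rms_normalize(samples, target_rms_db=-20.0):
    # Measure the selection's overall RMS level, then apply one gain so
    # it matches the target. Unlike peak normalising, individual peaks
    # may now exceed 1.0 (0 dBFS) -- that's where compression comes in.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    gain = (10 ** (target_rms_db / 20.0)) / rms
    return [s * gain for s in samples]
```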

3 - to attempt to match levels "instantaneously" to a specified or pre-set level, usually [again] by RMS processing, though either RMS or peak can be used as well as both in combination. This is simply a form of dynamics compression, and is probably the most common type of "normalisation" used by non-audio professionals. I would go so far as to say that this is what most non-pros are referring to when they say "normalising," and is, IMO, the absolute least-deserving method of the title. And yes, I'm using the term 'pro' pretty loosely.

This is basically a post-production equivalent to using AGC when recording. It typically involves raising levels through quieter passages to meet a minimum RMS amplitude as well as dynamic peak compression, which lowers levels above a set threshold by a ratio that changes depending on how high the input level is above the threshold, usually maxing out at infinity:1 [brickwall] limiting.

This can be done in real time as it relies on flying by the seat of its pants to work in the first place.
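
The threshold/ratio behaviour described above can be sketched per-sample like this. A real compressor works on a smoothed envelope with attack and release times, so treat this purely as an illustration of the gain math:

```python
import math

def compress(samples, threshold_db=-10.0, ratio=4.0):
    # Static compression curve applied per sample: anything above the
    # threshold has its overshoot (in dB) divided by `ratio`.
    # ratio = float("inf") collapses to brickwall limiting at threshold.
    thresh = 10 ** (threshold_db / 20.0)
    out = []
    for s in samples:
        mag = abs(s)
        if mag > thresh:
            over_db = 20 * math.log10(mag / thresh)
            mag = thresh * 10 ** ((over_db / ratio) / 20.0)
        out.append(math.copysign(mag, s))
    return out
```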

A good example of the last method there is iTunes' "Sound Check" option, which dynamically compresses all tracks played or burned. iTunes, Nero, and virtually all other burning software that offer an option like this have pretty liberal compression settings which are best-suited to use with all the same genre of music or age of recordings, for the same reason as I mentioned above - the wider the range in what you're assembling as a compilation, the more difficult it is to match everything without making all of it sound like crap.

In any case, the last method is the -last- method I choose to use, simply because it tends not to work well. It's often easier to spend an extra few minutes matching tracks' overall volumes by ear, jumping back and forth between them, than to rely on a tool that gives you no access to any of its settings, with the foreknowledge that its processing is purposefully middle-of-the-road. For example, I only use this if I'm making a one-off disposable compilation to listen to on a road trip, when I want to spend the least time possible building the compilation. For anything else I do it manually and carefully, because I'm fussy - and because the automatic results are generally bad.

In a possible move to eliminate some of the confusion, many companies have started renaming their plugins as "maximisers". These are basically optimised bitpushing [limiting] tools.

I would hazard to say that purists pooh-pooh most forms of maximising or normalisation because any change in dynamics is basically messing with the source too much to be considered acceptable.

My own opinion is that if your source material is fairly well-balanced in terms of levels, there's no need for dynamics processing at all; peak-normalisation is fine as long as it's followed by proper dither at some point [as altering levels digitally in any way always induces a certain amount of error, usually in the form of quantisation distortion].
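
For the curious: "proper dither" usually means adding a tiny amount of triangular-PDF noise, about one least-significant bit wide, before requantising - so the error becomes benign hiss instead of distortion correlated with the music. A toy sketch for reducing a float sample to 16 bits (illustrative only; real dithering tools also offer noise shaping):

```python
import random

def tpdf_dither_to_16bit(sample):
    # TPDF dither: the difference of two uniform random values has a
    # triangular distribution spanning roughly +/-1 LSB at 16 bits.
    lsb = 1.0 / 32768
    noise = (random.random() - random.random()) * lsb
    v = max(-1.0, min(1.0 - lsb, sample + noise))
    return int(round(v * 32768))
```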

If your source levels wander considerably, then RMS normalising is possibly a good idea, if carefully applied with manual settings by someone who knows how compression actually works and can avoid its pitfalls.

Lastly, even I use a maximiser [mostly for its high-quality dither as the final processing stage] for most recordings now, because much of what I record involves rather extreme dynamics to begin with. Sparingly applied [i.e. so that the bitpushed sections hopefully total less than about 1-2% of the whole recording], this doesn't do significant damage, and can greatly increase the average level of the recording without damaging the overall dynamics.

The key is to know what the specific tool is doing, and not to go overboard. Compression is very easy to abuse, and people tend to forget that there's such a thing as listening fatigue - which compression helps induce faster.

Experiment with settings, read up on what compression, compansion, and expansion actually are [A440 lists an excellent resource for this above, and even wikipedia has good stuff on related topics], and of course - use your ears and LISTEN.

And lastly - to those who like to bitpush the crap out of their recordings, my suggestion is that they turn up the volume knob instead. It has the same net effect [louder overall levels] without the distortion or limiting, and maintains the dynamic range of the recording rather than destroying it utterly.

To sum up:

Peak normalising = level change with no compression or clipping [used correctly, at least]

RMS normalising = level change with dynamics processing if peaks exceed 0dBfs [or lower, depending on the algorithm/settings used]; dynamic compression basically = bitpushing if levels are consistently high.

"Normalising" as done by most consumer and burning software = the post-pro equivalent of AGC, with recording levels slightly boosted to bring up quiet parts and everything above a certain level compressed with a dynamic ratio which eventually meets brickwall limiting and/or bitpushing. Generally speaking, this will give the worst overall results of any method available, but is the fastest to use since it involves no user-accessible settings or auditioning.

Cheers.

P.S. if anyone can think of a way for me to use the word "levels" more times in a single sentence than in the above, go for it. ;)

As long as it's not complete gibberish, I guess.


Oh, and

P.P.S. you might consider changing the topic of this thread to "what is normalising" since that's what it's about - relevant topics mean people can find what they're looking for.

P.P.P.S. having re-read the original post, scratch that. heh.


You should distinguish between a headphone mix - which should require little to no processing apart from some gentle volume corrections if using the 'binaural' or 'HRTF' mic placement - and a loudspeaker mix, which might require different mic placement, equalization, stereo, delay and other effects to sound good. It's very hard, if not impossible, to make it sound pleasant for both loudspeaker and headphone playback.


Are those plugins for Audition? What do they do?

Hello

a) I'm not sure whether Audition is a VST host, but it is probably able to load DirectX plugins, and some of those mastering plugins have both DirectX and VST versions

b) They more or less do finalizing - from noise, hum and crackle reduction to normalisation, dynamics and stereo enhancement, reverberation, EQ, etc. - and some have many useful presets

Byes

Edited by Question
