Very Newbie Question

Though I'm not new to Logic, I am very much a newbie when it comes to recording anything beyond virtual instruments. (My apologies in advance to George Leger, but everybody's gotta start somewhere.)

I have a very simple setup for recording vocals. A Shure SM58 is wired directly into my audio interface, an old ULN-2, which goes directly into my computer. There is no hardware mixer or outboard gear of any kind. The headphones come directly out of the audio interface. I've been able to get by up till now, but the headphone mix has been completely without reverb or any other effects, which I then add afterwards on playback. I'd like to learn how to record vocals while monitoring effects, but not recording them. Once I get that under my belt, I'll then need to learn how to record using a compressor or limiter to smooth out level spikes, and still only monitor the effects.

I know that this is basic recording 101 for most of you, but for me, engineering is something that I'm being dragged kicking and screaming into. Unlike playing & composing, engineering doesn't come naturally to me. I need step-by-step instructions on how to do this. Can someone assist? Thanks!
 
Not sure why you mention me, but here goes:

If you would like to have reverb on your vocal, there are two ways to do it. 1) Use a track insert. When you look at an audio fader, the top slot is the EQ visualizer. Until you double-click to add an EQ, or insert one later on in your series of effects, it looks like a box with nothing in it. Double-click on it and an EQ appears.

After that are your inserts. You can add effects like reverbs, chorus, etc. If you add a reverb here, you can monitor it without recording it. You use the reverb amount slider to increase or decrease the amount of reverb relative to the vocal.

So, put your mouse over the first insert slot, press and hold the mouse button, and you will see the list of plug-in effects available. Go to the Reverb list and add a Space Designer. You should be able to hear reverb on your voice.
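(If it helps to see what that reverb amount control is actually doing, here's a rough sketch in plain Python. Nothing here is Logic's real code; the names and numbers are made up purely for illustration.)

# Rough sketch of a wet/dry control on an insert effect (illustrative only).

def mix_insert(dry_signal, wet_signal, mix):
    # mix = 0.0 -> all dry vocal, mix = 1.0 -> all reverb
    return [(1.0 - mix) * d + mix * w for d, w in zip(dry_signal, wet_signal)]

dry = [0.8, 0.5, 0.2, 0.0]   # a few samples of the "vocal"
wet = [0.1, 0.3, 0.4, 0.3]   # a fake "reverb tail" for those samples
print(mix_insert(dry, wet, 0.25))  # mostly dry, with a little reverb blended in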

Try this out, and if you can get it working, I can explain what a bus is and how to set up a send - bus - return situation.
 
Thank you, George; I very much appreciate your help. Can't wait to try it. I referred to you earlier because I thought you might cringe at the thought of answering such a basic question. But as always, I genuinely appreciate all the time you've taken to help me out.
 
George,

At first, I couldn't get your solution to work, and it's taken this long to figure out why: I had software monitoring disabled. Re-enabling it then allowed your solution to work, with one caveat: when the vocalist sings live into her mic, she hears herself doubled in her headphone cue mix. Why would that be? Thanks!
 
That would be because you are monitoring two sources: your audio card and Logic. Try this: look at your audio card's app and see if it has a mixer. Turn down the level that corresponds to your mic input, and monitor only through Logic.

Another way to do it is to listen to the mic through the audio card and turn down the track in Logic. If you do it that way, remember that 1) your reverb send may need to be pre-fader (hold down the send knob to change modes), and 2) if you are using any EQ or compression within Logic, you won't be able to hear it.

You have to pick whichever approach you want... I suggest the audio card mixer be turned down and you monitor through Logic (as long as your I/O buffer is small enough... more than 128 and your vocalist might have a hard time recording due to the latency...).
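(To put rough numbers on that, assuming a 44.1 kHz sample rate, here's a little Python sketch; real round-trip latency will be somewhat higher once converters and drivers are added on top.)

# Buffer latency in milliseconds = buffer size in samples / sample rate.
# Illustrative only; converters and drivers add a bit more on top.
sample_rate = 44100  # Hz

for buffer_size in (32, 64, 128, 256, 512):
    one_way_ms = buffer_size / sample_rate * 1000
    round_trip_ms = 2 * one_way_ms  # input buffer plus output buffer, roughly
    print(f"{buffer_size:4d} samples: ~{one_way_ms:.1f} ms one way, ~{round_trip_ms:.1f} ms round trip")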
 
George,

Great; turning down the sound card volume achieved the desired effect. I can now record while monitoring reverb, without recording it. Now I'm ready for the next step that you mentioned:
Try this out, and if you can get it working, I can explain what a bus is and how to set up a send - bus - return situation.
Please do. My intention is to learn how to record using either limiting or compression. I understand the difference between the two, but I'm not sure which I should be using to record a lead vocal. Should I only be concerned with limiting level peaks to avoid distortion, or should I also be compressing to raise spots that are too quiet (or is it smarter to apply compression after the vocal is already recorded)? And how do I set it up?

Thank you again sir!
 
George,

This video made the concept of setting up an aux/bus for effects simpler for me to understand.

It would appear that a bus is a conduit that sends the signal to an Aux, where the effect is chosen. The amount of level sent from the bus determines the depth of the effect.

What is a send and how is it used?

Thanks!
 
We would use a send and aux return when we want to use the same thing on multiple tracks at the same time, reverb for instance.

In the old days (pre-DAW) a studio would often have one reverb unit (be it a digital one, or maybe a spare room that could be used... Motown Records had an attic that was used for all their old songs... and if you work with a real old-time recording guy or girl, they call it echo, not reverb). They would send any instruments to the room using a bus on their console, and the level being sent would alter the effect: louder reverb with less direct sound makes a sound move back into the sound field.

Now, today most DAWs have plenty of built-in reverb effects (Logic has 6, I believe); some use less CPU power and others more. There was a time when we had to choose which reverb we would use depending on the computer: more powerful systems could use the Platinum Reverb, slower systems (laptops, for example) would use the Silver. Today we have reverbs that use a "tone picture" called an impulse response, and they take a lot of power.

Because of the limited availability of power, one would use one send to bus to the aux, where the reverb would be located, allowing them to add reverb to whichever tracks they wanted while conserving power. Rather than use one reverb on each track (which can use a huge amount of power with an IR that has a long decay time), we use one to create a cohesive sound in our mix.
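(If a sketch in code helps, here's the idea in a few lines of Python. The "plug-in" and the numbers are made up; the point is simply that every track's send feeds one shared bus, and a single reverb on the aux processes the sum.)

# One shared reverb on an aux, fed by per-track sends (illustrative only).

def fake_reverb(signal):
    # Stand-in for an expensive reverb plug-in: a crude decaying smear.
    out = list(signal)
    for i in range(1, len(out)):
        out[i] += 0.5 * out[i - 1]
    return out

tracks = {
    "vocal":  [0.8, 0.4, 0.2, 0.0],
    "guitar": [0.3, 0.6, 0.3, 0.1],
}
send_levels = {"vocal": 0.4, "guitar": 0.2}  # how much of each track feeds the bus

# Each send taps a portion of its track onto the shared bus...
bus = [0.0, 0.0, 0.0, 0.0]
for name, signal in tracks.items():
    for i, sample in enumerate(signal):
        bus[i] += send_levels[name] * sample

# ...one reverb runs on the aux, and its return is added back to the dry tracks.
aux_return = fake_reverb(bus)
mix = [sum(sig[i] for sig in tracks.values()) + aux_return[i] for i in range(4)]
print(mix)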

I hope that makes sense. It's based on limitations which we really don't have today, especially with things like the UAD systems that use an extra card or four to add power to our systems, or multicore CPUs, or cheap outboard boxes (or expensive ones like the Bricasti M7, a stereo reverb that uses 10 of the same SHARC DSP chips that a UAD card uses to build up its sound field, and sounds unbelievable, or the old Lexicon 960 reverbs, that cost thousands each).

I crack up when I hear people go off about the current DAW technology and systems that we have, remembering that 32 years ago I started recording with a Tascam 144 cassette 4-track recorder, and graduated to a Fostex G16 16-track 1/2-inch tape deck, a 6 ft rack of synths as well as a couple of controllers, 3 reverbs, and a Soundtracs PC MIDI board, 56 inputs with MIDI muting. I can do 10 times that on my MacBook Pro, and about 20 times that on my custom-built hackintosh.

I have been so lucky... the year I started recording as a pro, MIDI started, as did computers and technology, and I went from 4-track tape to 2-track digital (the Digidesign Sound Tools system), to 8 channels of Pro Tools 1, to Logic and my audio interface (I have about 6 of them, so it depends on what I'm doing as to which I use). It's been an incredible journey. I'd suggest you take some time to check out some books on what it was like (Computer Music has a 20th anniversary issue out now that comes with issue 2 as a bonus, and it's so interesting to see where this stuff came from).

Apple, Atari, and Commodore were the first computers; Tascam, Fostex, and Otari made the first affordable tape decks; Roland, Korg, Ensoniq, E-mu, and Sequential Circuits, to name a few, brought us the synths and instruments we needed to do our jobs and record; SMPTE Track, Opcode Vision, Mark of the Unicorn, and C-Lab Notator allowed us to record and edit MIDI data. Then we got Propellerheads, the creators of the first system to move data from a virtual instrument into a DAW, and that changed everything.

I mention these guys because some are gone (Opcode were the real motivators behind Standard MIDI Files; Roland, Yamaha, and Sequential were the guys who worked on the MIDI spec that makes what we do today possible), and most people don't realize that Propellerheads should be honored with some kind of lifetime award for the things they have come up with that transformed our ability to create.

Anyways, sorry for the rant. It's been a fantastic journey, and I love it when people actually give a damn enough to learn about this. The "craft" of audio engineering is actually being lost. Most people have no idea how to mic a drum kit, or set up a session for a band or an orchestra. I just hope that the real craft of what we do doesn't get lost to time and technology.

I can tell you this: recording with Barry Manilow, at Capitol Records here in LA, with Allen Sides recording and Michael Lloyd producing, with a 40+ piece orchestra live off the floor, was a moment I will never forget as long as I live. And I hope that some of you get to go that far too: the people who dig, ask questions, respectfully be a PITA, but also dig on their own to discover what it is we can really do, who find the magic, and have moments in the studio that become highlights of their lives.

Merry Xmas LUG... I'm happy we're all here, and I'm honored to be part of this unique community. Thanks...
 
Good post George.

I would add that there is also another reason: sharing the same reverb via bus/aux/sends glues the sound together. After all, a bunch of players in a room, concert hall, etc. are in the same ambience. They "hit" it at different angles and amounts depending on the instruments and where they are, but it is still the same space.
 
The above-referenced video shows how to bus to an aux, but what is a "Send", and how does it differ? Thanks!
 
A "Send" is a virtual knob on a channel strip that routes a user definable portion of that channel strip's signal to another location in your audio stream. It routes it through a virtual pathway called a Bus. And the usual destination of that Bus pathway is an Aux track with it's input set to correspond to the Bus being used by the Send.

Hope this helps.

And yeah, great post George! Happy holidays!:)
 