Logic Pro 8 (I/O Buffer Size/Sample Rate)*2 why?


Hi everybody!
nice to be here!

well, I have one (actually two) general questions rather than Logic-specific ones, and hoped I could get an answer here!
However, while googling around for some deeper understanding of latency, I stumbled over this formula:
latency = (I/O buffer size / sample rate) * 2

it is stated here

and also on some RME site!

well, I can't make sense of what the input buffer and the output buffer are supposed to be!
does any audio stream (e.g. guitar -> soundcard -> computer -> soundcard) imply those two buffers?
what exactly is the input buffer meant for? and of course the same question for the output buffer?

another thing you could maybe help me understand: is that buffer a portion of memory in RAM or on the soundcard?

many thanks!!
I'm by no means a technical expert, but the I/O buffer size is the (adjustable) amount of time the software gets to communicate with your soundcard and to process whatever has to be processed.
Let's say you set a buffer size of 64 samples. At 44.1 kHz we have 44,100 samples per second, and the host gets 64 of them at a time to do its processing duties. This equals roughly 1.45 milliseconds.
Now, the *2 thing is simply because, at least in a software audio monitoring scenario, this processing has to take place twice: once at your input, once at your output. So, theoretically, in the above scenario, the overall latency would be somewhere around 3 ms.

In practice, this is not exactly true, as all soundcards add a certain amount of what might be called a "safety buffer" (probably so they can do some onboard processing, data passing and whatnot). This safety buffer is usually higher on cheaper cards, and it's also somewhat higher on FireWire or USB cards compared to PCI or PCMCIA/Cardbus solutions. As an example, on my Macbook, the internal soundcard permanently adds a whopping 13 ms to the signal, rendering it pretty much useless for software audio monitoring. RME, however, claims that their PCI cards (and now even their new USB Fireface) only add 64 samples of said safety buffer.

As you can see, this is also why working at higher sample rates allows for lower overall latencies. 64 samples simply take less time at 88.2 or 96 kHz than at 44.1.
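For anyone who wants to play with these numbers, here's a quick Python sketch of the formula. The 64-sample safety buffer in the last line is just an assumed example figure (like the RME one mentioned above), not something any particular card guarantees:

```python
# latency = (I/O buffer size / sample rate) * 2, plus whatever
# "safety buffer" the interface silently adds on top.

def buffer_latency_ms(buffer_size, sample_rate):
    """One-way latency of a single I/O buffer, in milliseconds."""
    return buffer_size / sample_rate * 1000.0

def round_trip_ms(buffer_size, sample_rate, safety_samples=0):
    """Input buffer + output buffer + an assumed extra safety buffer."""
    return (2 * buffer_size + safety_samples) / sample_rate * 1000.0

for rate in (44100, 88200, 96000):
    print(rate, round(buffer_latency_ms(64, rate), 2))
# 64 samples: ~1.45 ms at 44.1 kHz, ~0.73 ms at 88.2 kHz, ~0.67 ms at 96 kHz

print(round(round_trip_ms(64, 44100), 2))                     # ~2.9 ms
print(round(round_trip_ms(64, 44100, safety_samples=64), 2))  # ~4.35 ms
```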

Generally, the smaller the buffer size, the smaller your latency, but sometimes it comes with an additional CPU hit. One of the good things about Logic is that it performs extremely well with low buffer settings (something that can't be said for, say, Cubase). Most people are able to run their Logic projects at a 64-sample buffer size throughout, so it's probably only necessary to switch to a somewhat higher setting in case you really want to squeeze every last ounce of CPU juice out of your machine. On today's fast computers, it hardly ever really matters.

- Sascha
oh man!!
huge thanks for taking the time, and for the friendliness and passion behind that great explanation!
while I'm aware of the things you mentioned, I was too stupid :brkwl:
(and stressed due to the time pressure of delivering an essay, and frustrated after searching for just that info for 3 days now, only to be able to make a clear statement) to see that the answer to my question was implied in that Logic paper!
realized this 2 hours ago:
latency is the sum of:

* AD converter latency
* Input portion of the I/O buffer
* Audio driver latency
* Output portion of the I/O buffer
* DA converter latency
this applies to my guitar example!
2 buffers, because one is for collecting the ADC data from the soundcard, and the other is for the plugin-processed data to be sent back to my soundcard!

this one:
latency when playing software instruments:
* Audio driver latency
* Output portion of the I/O Buffer
* DA converter latency
no input buffer! you see? I didn't :brkwl:
so in this case the formula doesn't apply! :)
logic, isn't it? :D
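Just to spell out those two lists in code for my notes — only the buffer math here is real; the converter and driver figures are made-up placeholders, since they depend entirely on the hardware:

```python
# Two signal paths from the Logic paper. The I/O buffer math is real;
# the converter/driver numbers below are assumed placeholder values.

BUFFER = 64    # I/O buffer size in samples
RATE = 44100   # sample rate in Hz
buffer_ms = BUFFER / RATE * 1000.0

ad_converter_ms = 0.5  # assumed example value
da_converter_ms = 0.5  # assumed example value
driver_ms = 1.0        # assumed example value

# Guitar -> soundcard -> computer -> soundcard: both buffer halves count.
monitoring_ms = (ad_converter_ms + buffer_ms + driver_ms
                 + buffer_ms + da_converter_ms)

# Software instrument: no input side, so no ADC and no input buffer.
instrument_ms = driver_ms + buffer_ms + da_converter_ms

print(round(monitoring_ms, 2))  # ~4.9 ms
print(round(instrument_ms, 2))  # ~2.95 ms
```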
the only thing left to know (though just out of curiosity; I won't include it in my essay anymore) is whether that buffer is located on the soundcard or in RAM!

thx again for taking time!
When playing software instruments, however, there's some additional MIDI latency involved. Certain kinds of cheap USB interfaces in particular are rather well known to add quite some latency of their own. With good interfaces, the numbers should be around 1-3 ms; with cheaper ones, they could easily go up to, say, 10 ms.
I think that's why there are still quite a few people using Unitors and the like, as these are pretty much known for their rather low MIDI latency.

Fwiw, IMO it often helps to translate all these figures into a real-world scenario.
In, say, 10 ms, sound travels roughly 3.4 meters, so playing a software amp with an overall latency of 15 ms will feel like standing around 5 meters away from your amp. Usually something all of us live players are kinda used to, but when you're also used to the pretty much direct "impact" you get in an "old-fashioned" recording situation (read: mic a few centimeters from your amp, straight into a console, then straight into your headphone amp), it can be quite irritating, especially when working with headphones.
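If you want to do that translation yourself, the conversion is trivial (assuming sound travels at roughly 343 m/s in air at room temperature):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def latency_to_meters(latency_ms):
    """Distance sound covers in the given number of milliseconds."""
    return SPEED_OF_SOUND * latency_ms / 1000.0

print(round(latency_to_meters(10), 1))  # ~3.4 m
print(round(latency_to_meters(15), 1))  # ~5.1 m
```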
Being a guitar player as well, my personal tolerance level for most stuff is around 10-15 ms; any higher and it definitely becomes a little irritating when using headphones. For some funky rhythm playing, I prefer to record without headphones and set the monitoring volume so low that I can still hear (and feel) the direct attack of the strings.
And while I love software amps and the way they have massively improved over the last 2-3 years (some of the amps in Guitar Rig 3 are fantastic, and so is Overloud's TH1), when I'm done moving houses next month, I will most likely build myself a "dead box" (a small 1x12" cab in an isolation box plus 1-2 mics hooked up permanently) for most of my recording duties.

Fwiw #2, I also recommend *not* using software amp sims for practising when it's about accurate timing and such. Even if 10 ms of overall latency isn't all too much, and even if you can compensate for it, it'll still make you get used to playing a bit in front of the beat, something which I definitely don't like (it took me years to "learn" to play a little laid back).

After all, our brains are extremely sensitive when it comes to time-related things, even if we probably aren't exactly aware of it. There are some little tests one can do, such as doubling a take, panning the two versions hard left and right, then delaying one a little while lowering the volume of the un-delayed track. Most likely one will still perceive the un-delayed track as being sort of louder, more "direct" or whatever.
This of course isn't exactly all too important once we deal with one recorded take only, but it goes to show that at least thinking a bit about the various things that could possibly be affected by latency is quite important.

- Sascha
Hi Sascha!

I'm not sure if you might have misunderstood my question; if so, I'm glad you did! :D
Anyway, you put so much heart into this thread that I can't resist having an inspiring conversation.
While I've been playing around with my DAW for 2 years now and know most of the stuff you mentioned, there are some interesting things I didn't know!
so is Overloud's TH1
I missed this thingy! must try, must try...
With good interfaces, the numbers should be around 1-3 ms, with cheaper ones, they could easily go up to, say, 10 ms.
could you mention a good and a cheap one?

from what I know (the MIDI specification), on a conventional MIDI connection (through a MIDI cable) a single "note-on" event has 2/3 to 1 ms of latency (because of the 31.25 kbit/s bandwidth and 2 or 3 bytes per event, each byte taking about 1/3 ms on the wire). Two simultaneous "note-on" events take 4/3 to 2 ms (actually 5/3 ms with running status).
10 "note-ons" pressed at the same time make one of them arrive with a latency of about 7 ms.
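here's that wire math spelled out in Python, assuming the usual 10 bits per byte on the wire (start bit + 8 data bits + stop bit) and running status, so only the first event in a burst carries the status byte:

```python
MIDI_BAUD = 31250   # bits per second, per the MIDI 1.0 spec
BITS_PER_BYTE = 10  # 1 start bit + 8 data bits + 1 stop bit
byte_ms = BITS_PER_BYTE / MIDI_BAUD * 1000.0  # 0.32 ms per byte

def note_on_burst_ms(n_notes, running_status=True):
    """Wire time until the last of n simultaneous note-ons arrives."""
    if running_status:
        n_bytes = 3 + 2 * (n_notes - 1)  # status byte sent only once
    else:
        n_bytes = 3 * n_notes            # full 3-byte message each time
    return n_bytes * byte_ms

print(round(note_on_burst_ms(1), 2))   # ~0.96 ms
print(round(note_on_burst_ms(2), 2))   # ~1.6 ms
print(round(note_on_burst_ms(10), 2))  # ~6.72 ms, the "7 ms" figure
```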
I have an M-Audio Ozone that I can connect directly via USB, but here I don't know whether the MIDI layer is completely bypassed and the latency depends only on the USB transfer. This would be a great thing to know for sure!!!
Thinking about this once more: maybe the interfaces known to have bad latency don't bypass the original MIDI layer (if that's possible at all), and the ones that work acceptably do!
Would make sense!
(it took me years to "learn" about playing a little laid back).
don't know what you mean by this, but if it means being late in time,
then I've always been an expert at that! :D :D
Since I play a lot of metal and rock stuff, a bit of training to be able to play more precisely might not be wrong! Well, I'd prefer to play directly through an amp, but I live in a flat and my amp went dead a few years ago, so I didn't bother buying a new one.
when I'm done moving houses next month
you lucky man!! wish I could too!!
There's some little tests one can do, such as doubling a take, panning the two versions hard left and right, then delaying one a little but lowering the volume of the un-delayed track. Most likely one will still perceive the un-delayed track as being sort of louder, more "direct" or whatever.
cool stuff! I'll try it for sure...
but I can imagine it could be due to the first perceived transients! I hear worse on my left side than on my right! might be a problem! :D


PS: since you live in Hannover, have you been here:

I'll definitely make a trip there in the summer and try those metallic cymbals and the hi-hat!