
Logic 9 Logic Recording Delay Offset

Discussion in 'Logic 9' started by Pete Thomas, Oct 12, 2011.

  1. Pete Thomas

    Pete Thomas Administrator Staff Member

    This is just a reminder for those who already know about recording delay offsets, and a warning for those who don't.

    A while back I did a test and found I needed a delay (offset) of -72 samples (Apogee Ensemble + Mac Pro).

    Three weeks ago I got a new iMac and migrated everything, but it never occurred to me to recheck the offset until just now. Luckily I did this before an important session, and found that it had changed to about -8.

    OK, that's only a 64-sample difference and you might not notice it without a test, but I assume it can make a difference in tightness.

    So it's worth doing a test whenever you change anything on your system, and there is no harm in doing one even if you haven't. I remember when I first got the Ensemble, the offset changed with every new driver release.

    More info in Orren's article:


    My own test is quite rudimentary: I create an audio track from a software instrument click track (via Bounce in Place), then hang my headphones next to the mic and record it. Then I zoom in with the Sample Editor and compare the two recordings, which should be identical, ideally so close that they are phasing.
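    A comparison like this can also be done numerically instead of by eye. The sketch below is not part of Pete's method, just a hypothetical helper using NumPy: cross-correlating the bounced click with the re-recorded one, with the peak of the correlation giving the offset in samples.

```python
import numpy as np

def recording_offset(original: np.ndarray, recorded: np.ndarray) -> int:
    """Estimate how many samples late (positive) or early (negative)
    the recorded track is relative to the original, via cross-correlation."""
    # Full cross-correlation; its peak marks the lag of best alignment.
    corr = np.correlate(recorded, original, mode="full")
    return int(np.argmax(corr)) - (len(original) - 1)

# Synthetic check: a click at sample 100, and a recorded copy
# arriving 72 samples later (the offset Pete mentions above).
original = np.zeros(1000)
original[100] = 1.0
recorded = np.zeros(1000)
recorded[172] = 1.0
print(recording_offset(original, recorded))  # 72
```

    With real recordings the peak is less sharp than with a synthetic click, which is why a percussive test signal works best.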
    Last edited: Jun 24, 2015
  3. Doug Zangar

    Doug Zangar Senior member

  4. Peter Ostry

    Peter Ostry Administrator Staff Member

    The latency can get much greater in voice-after-voice recording, depending on which voice you concentrate on while recording a new one. For example, if you record one voice along with the playback and then add another voice while concentrating on the first take for some reason, you are already 128 samples behind the playback. Add a couple more overdubs against your latest takes and you can end up with a lot more latency. The bad thing is that you don't notice it while you record. Even if your playing is very tight, the timing slowly, very slowly gets out of control with each recording.
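    The accumulation Peter describes can be sketched numerically. Assuming, purely as an illustration, a constant uncorrected offset of 64 samples per recording pass, each overdub generation recorded against the previous take drifts further from the original playback:

```python
UNCORRECTED_OFFSET = 64  # samples late per recording pass (assumed value)

# Generation 1 is recorded against the playback, generation 2 against
# generation 1, and so on -- the uncorrected offsets simply add up.
cumulative = [generation * UNCORRECTED_OFFSET for generation in range(1, 5)]
print(cumulative)  # [64, 128, 192, 256]
```

    After four generations the latest overdub sits a quarter of a millisecond's worth of samples behind the playback, without the player ever hearing it happen during recording.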

    Delays in the recording or monitoring path are another game, but if you cannot rely on the accuracy of the DAW → interface → DAW round trip, you risk a kind of "floating tightness" for purely technical reasons. This is not necessary, therefore the recording latency should be zero.

    Although this is a logical approach, it is not the correct method to measure the latency. Apart from a monitoring delay which some people may have in their monitoring path, what does "hang my headphones next to the mic" mean? Maybe exactly at the distance that gives you a 64-sample delay? :)
    An easy and 100% accurate method is to connect an interface output to an interface input with a cable. Take a very sharp sound, or draw a spike in the editor. Measure the distance from the beginning of the region to the spike and write it down on yellow or pink paper. Then play the region and record it on another track. In the editor, measure the distance from the beginning of the recorded region to the recorded spike and adjust Logic's recording delay. Repeat the process until the distances in the original and the recorded region are equal.
    If you have a setup that works with both analog and digital ins/outs, be aware that the latency is not necessarily the same for both, so better check it. If you find different latencies, Logic cannot help you: either settle in the middle of the two or change the recording delay according to your recording situation.
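    The spike measurement above can be expressed in code. A minimal sketch (hypothetical helper, assuming the two regions are NumPy arrays of samples) that finds the spike in each region and reports the delay to compensate in Logic:

```python
import numpy as np

def spike_position(region: np.ndarray, threshold: float = 0.5) -> int:
    """Index of the first sample whose magnitude exceeds the threshold --
    the 'distance from the beginning of the region to the spike'."""
    hits = np.flatnonzero(np.abs(region) > threshold)
    if hits.size == 0:
        raise ValueError("no spike found above threshold")
    return int(hits[0])

# Original region: spike 50 samples in. Recorded loopback region:
# the same spike arriving 10 samples later (the latency to dial out).
original = np.zeros(500)
original[50] = 1.0
recorded = np.zeros(500)
recorded[60] = 1.0
delay = spike_position(recorded) - spike_position(original)
print(delay)  # 10
```

    This is just the automated version of measuring both distances by hand; the point of the loopback cable is that no acoustic path contaminates the number.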

    For those who are interested: there is an old but still very interesting article about recording latency on John Pitcairn's website: http://www.opuslocus.com/logic/record_offset.php
  5. Pete Thomas

    Pete Thomas Administrator Staff Member

    Yes, I have used the second method as well, but with headphones I stick the mic at more or less the same position as my ear, very very close.

    It seems logical because it's analogous to what actually happens, i.e. you monitor the music in your headphones and play/sing into a microphone.

    However, I will experiment with both methods later and see which one is more accurate (if there is any difference).
  6. Pete Thomas

    Pete Thomas Administrator Staff Member

    Results of test (using Metric Halo MIO 2882, Logic 9.1.5, iMac 3.4 GHz i7)

    Cabling output to input = 10 samples late

    Headphones right on the mic (as close as ears would be) = 16 samples late

    Headphones six inches away = 34 samples late.
  7. Peter Ostry

    Peter Ostry Administrator Staff Member

    I guess your mic was about 15 mm away from the headphone's diaphragm. If my calculation is right, a 15 mm recording distance should add about 2 samples:
    Speed of sound: 343 m/s
    Recording distance: 15 mm → 0.015 m / 343 m/s ≈ 0.044 ms

    1 ms = 44.1 samples (at a sampling rate of 44.1 kHz)
    0.044 ms × 44.1 samples/ms ≈ 1.9 samples
    But you got a 6-sample difference, 4 samples more than expected. A digital conversion in your monitoring or recording path would explain the extra 4 samples. But if there is no such thing, where do they come from?
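    This kind of arithmetic generalizes to any distance. A small sketch, assuming 343 m/s and a 44.1 kHz session, that converts a recording distance into samples of delay:

```python
SPEED_OF_SOUND = 343.0  # m/s, roughly at room temperature
SAMPLE_RATE = 44100     # Hz, assuming the session runs at 44.1 kHz

def air_delay_samples(distance_m: float) -> float:
    """Samples of delay added by sound travelling distance_m through air."""
    return distance_m / SPEED_OF_SOUND * SAMPLE_RATE

print(round(air_delay_samples(0.015), 1))   # 15 mm -> 1.9 samples
print(round(air_delay_samples(0.1524), 1))  # six inches -> 19.6 samples
```

    The six-inch figure also lines up reasonably with Pete's measurements above: 34 − 16 = 18 samples of extra delay for the headphones moved six inches away.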

    However ...

    ... I've changed my mind and believe that the headphone method is better than the cable method because – as you said – it represents what really happens. It delivers the real latency for the actual monitoring and recording path.
  8. Markdvc

    Markdvc Administrator Staff Member

    I could quite likely be spouting nonsense here, but could the headphone transducers themselves cause a slight latency? 4 samples at 44.1 kHz equates to about 1/10000 of a second. Could this be a ballpark figure for the amount of time elapsing between the electrical signal reaching the headphone components and them actually converting this signal into sound?

    Just speculating - dangerous at this time of day before coffee ;)

    kind regards

  9. Pete Thomas

    Pete Thomas Administrator Staff Member

    I wonder at what actual number of milliseconds any of this starts to matter, or becomes "unrealistic".

    Look at a couple of live situations:

    (A) Playing in a band and trying to play in time with a drummer who is maybe 30 feet away on a large stage, as well as there being an onstage monitoring system and reflections of the drums from the back of the room.

    Theoretically the on-stage monitor signal is going to be closest to the drummer's actual beat, but from what I remember of my days doing live work, you could never rely on those wedges; more often than not all you had in them was the singer, and a bit of yourself if you were lucky.

    (B) Playing in an orchestra, watching the beat from a conductor. Enough said.

    Both of those situations are probably way more milliseconds than the kind of recording offset we are discussing here anyway.

    Plus, the headphone-on-the-mic test might be more realistic, but not if you are overdubbing an instrument in the control room. Instead of teeny headphone transducers causing 4 samples of delay as Mark surmises, there are dirty great big monitor transducers plus a large distance between you and the speakers. (Try recording in the control room at Real World.)
  10. Markdvc

    Markdvc Administrator Staff Member

    Of course - as a drummer in a Big Band I'm used to dealing with less than adequate on stage acoustics. One club we play in has a huge reflection off the rear wall. As you know, trumpets can be deafening for anyone located in front of them, even at some distance away, but from the side, room acoustics can make their sound muddy and diffuse. In the case I mention, the strongly delayed signal coming off the rear wall, some 40 - 50 feet away, is subjectively almost as loud as what I hear directly from the horns beside me on stage.

    But we do learn to deal with those sort of problems, the brain seems to be wonderful at filtering signals, and at compensating for delays caused by longer distances, or, as you say, conductors' timing having to be interpreted.

    While delays caused by converters, software etc. may be tiny in comparison, it should be borne in mind that these are system-based (and, through overdubbing, can become cumulative and thus significant), as opposed to, for want of a better word, "musical" delays. I think it is important to separate the two.

    kind regards

  11. Peter Ostry

    Peter Ostry Administrator Staff Member

    They do, I see what you mean. The coils in the phones move a diaphragm which moves the air; 15 mm later the air moves the microphone's diaphragm. We get an electrical/mechanical and a mechanical/electrical conversion that were not included in the calculation above. I doubt that the masses of the diaphragms and the "reaction time" of the air cause a delay in the order of samples, but they must add a little latency. We cannot explore this microcosmos with a DAW; we would need to go to the lab.

    Very true and that is why I find Pete's headphone measurement better for the common headphone monitoring situation. Normally we say, "get your technical adjustments correct and deal with the other things separately" but in this case we have no chance to correct something somewhere else. Analog driven headphones and an analog recording path are the shortest and fastest signal flow we can construct.

    Well, for best accuracy we should split the recording signal for monitoring, thereby bypassing the interface. I play best with a split signal, while others don't care about 40 ms. I don't even feel it, I just hear it afterwards. With a (timing-wise) lousy player like me you have no choice but to either split the recording signal or let it run as it is and give me a negative delay afterwards. Although the latter is not optimal: not being mentally locked to the time, I don't only delay but also float a bit.

    Whatever we do, the target is to write the recorded signal to the correct position in the track. I think everything should be done to reach this target, even if the method looks strange. When my studio is up and running again I will certainly try Pete's method. It would not be the first time that something opposite to what the technical brain suggests works better.

    Wouldn't it be good if Logic had a bigger default recording latency and let us adjust this delay for each track, positive and negative? Those who don't care could simply ignore it; for others it would be a valuable tool for fine-tuning. But whom am I telling – John Pitcairn requested it many years ago and only a few people listened (http://www.opuslocus.com/logic/record_offset.php – scroll all the way down to the paragraph "A better suggestion").

    Unrealistic or not depends on the music and on the musical and emotional perception of the listener. Playing out of time can add an attractive feeling if it fits in the context. But what to do with the normal looseness, what is artistic, what is accepted and what is wrong?

    Our brain is used to a certain delay in a particular situation and a different delay introduced by technical equipment irritates us. Therefore anything above 0 ms matters. But this is only the technical part. The trick is to imitate what a musician would hear under natural circumstances. This is not trivial and not taught in any school.

    Following Mark's suggestion we can talk about technical and musical delays. The musical delay is natural and any musician can handle it; unnatural reflections and technology-based delays disturb the feeling. A drummer 30 feet away feels quite normal, as he has a natural delay. But the reflection from the back wall is not natural; that is a technical delay. You are lucky if the wall is at the right distance to produce a natural reverb. Otherwise you get all kinds of weird soundwave modulations.

    Such modulations are often accepted and part of a live performance. But in recording, if we shift the time by latency, the problem gets worse. A recording musician does not automatically imagine a huge stage. With the phones directly at his ears he notices just the time difference without any reason for it. Maybe he adapts to that, maybe not.

    Does this have to do with latency? The conductor tries to give all players the same feeling. He stands in a position where the music sounds pretty bad, but it is the only position where he can hear all of them with their natural delay. OK, in this regard it does have to do with latency, but only for the conductor. For the musicians he is not an optical metronome; he tries to move the hearts and breath of all of them, to let them sound like one big instrument.
