
I used to work on hearing aids, and there's a similar concept in human audio perception with regard to audio latency. If you add latency to some audio (say, because of the hearing aid's processing), the perceived degradation doesn't increase smoothly with the latency. Instead, there are a few thresholds at which the audio becomes perceptibly worse. The main ones are:

0--2 ms: No perception of latency at all.

2--10 ms: The delayed audio combines with the direct sound to produce a comb filtering effect, so the audio sounds somewhat distorted (a quick sketch of this appears after the list).

10--30 ms: There's a perception that something is "off" about the audio and it requires greater cognitive effort to listen to. This is partly due to a noticeable desynchronization between the audio and visual cues. Another factor is the "Haas effect." If you have a direct path (audio that goes straight into the ear) and audio coming out of the hearing aid at some latency with respect to the direct path, the arrival of these two separate wavefronts at different times causes a perceptible distortion in the audio.

30+ ms: Beyond this, people perceive an actual lag between what they see and the audio they hear. It can be almost nauseating to listen to audio at this latency for long periods of time.
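
For anyone who wants to see the comb filter concretely, here's a minimal numpy sketch. It isn't from any hearing-aid codebase; the 48 kHz sample rate, equal path levels, and all the names are my assumptions. Summing a direct path with a copy delayed by 2 ms puts deep notches at 250 Hz, 750 Hz, and so on:

    # Minimal sketch of the two-path comb filter. Sample rate and equal
    # path levels are assumptions for illustration.
    import numpy as np

    fs = 48_000                        # assumed sample rate, Hz
    delay_ms = 2.0                     # hearing-aid latency for this example
    d = round(fs * delay_ms / 1000)    # delay in samples (96)

    # Impulse response of "direct sound + equally loud delayed sound".
    h = np.zeros(d + 1)
    h[0], h[d] = 0.5, 0.5

    # Magnitude response is |cos(pi * f * delay)|: deep notches wherever
    # the two wavefronts arrive half a cycle apart and cancel.
    H = np.abs(np.fft.rfft(h, n=16384))
    freqs = np.fft.rfftfreq(16384, d=1 / fs)

    first_notch = 1000 / (2 * delay_ms)          # 250 Hz for a 2 ms delay
    bin_at_notch = round(first_notch / (fs / 16384))
    print(f"predicted first notch: {first_notch:.0f} Hz, "
          f"spacing: {1000 / delay_ms:.0f} Hz")
    print(f"|H| near {freqs[bin_at_notch]:.0f} Hz: {H[bin_at_notch]:.3f}")

Since the notch spacing is 1/delay, longer delays pull the first notch lower and pack the notches closer together, further into the range where speech energy lives.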

The upshot of all this is that if you look at the latencies of hearing aids on the market, they cluster into two groups: one at around 2 ms and another at around 10 ms. If you can fit all your processing into 2 ms, that's great; but if you go much longer than that, you may as well take the full 10 ms to do even more processing, because people aren't going to be able to tell the difference.
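
To make those two clusters concrete, here's a hypothetical back-of-envelope budget (the 24 kHz rate is my assumption; real devices vary):

    # Hypothetical latency budget: samples of end-to-end headroom per
    # perceptual bucket. The 24 kHz sample rate is an assumption.
    fs = 24_000
    for budget_ms in (2.0, 10.0):
        samples = round(fs * budget_ms / 1000)
        print(f"{budget_ms:>4} ms at {fs} Hz -> {samples} samples total "
              f"for ADC + DSP + DAC")

A 2 ms budget leaves only a few dozen samples for everything, which limits you to very short filters; the 10 ms bucket is what makes heavier block-based processing practical, consistent with the clustering above.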


