
That's actually a very good question. To some extent, performance analysis tools may be able to give you high-accuracy timestamps of context switches and other events that can cause scheduling jitter. If you can get access to things like the fill level of FIFO buffers, even better. You may also be able to do experiments like cutting buffer sizes down to the bone to see how low they can go without glitching.
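
For the buffer-shrinking experiment, something along these lines is the kind of thing I mean (ALSA-specific, and the device name, format, and sizes here are only placeholders, not recommendations):

    #include <alsa/asoundlib.h>

    int main(void) {
        snd_pcm_t *pcm;
        snd_pcm_hw_params_t *hw;
        unsigned int rate = 48000;
        snd_pcm_uframes_t period = 64;           /* try shrinking this */
        snd_pcm_uframes_t bufsize = period * 2;  /* double-buffered */
        int dir = 0;

        if (snd_pcm_open(&pcm, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;
        snd_pcm_hw_params_alloca(&hw);
        snd_pcm_hw_params_any(pcm, hw);
        snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
        snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
        snd_pcm_hw_params_set_channels(pcm, hw, 2);
        snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, &dir);
        snd_pcm_hw_params_set_period_size_near(pcm, hw, &period, &dir);
        snd_pcm_hw_params_set_buffer_size_near(pcm, hw, &bufsize);
        if (snd_pcm_hw_params(pcm, hw) < 0)
            return 1;  /* the device rejected this configuration */

        printf("got period %lu frames, buffer %lu frames\n",
               (unsigned long)period, (unsigned long)bufsize);
        /* ... run the normal read/write loop and listen/watch for xruns ... */
        snd_pcm_close(pcm);
        return 0;
    }

Keep lowering the period until you start getting xruns; the smallest size that stays clean under load tells you how much real headroom your system has.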

Of course, it's not unusual that the many layers of abstraction in modern systems actively frustrate getting real performance data. But dealing with that is part of doing real engineering.



For context-switch timestamps, would I be using something like perf on Linux?

Would buffer-level checking entail, e.g., calling snd_pcm_avail() when my audio code begins and ends (assuming I'm talking directly to the hardware rather than to PipeWire)? Dunno if PipeWire has a similar API, given that it's a delay-based audio daemon, and calling snd_pcm_avail() on every client request would probably slow it down.
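
Roughly the kind of check I'm imagining (assuming a plain ALSA hw device; run_one_period() and process_audio() are made-up names for whatever the real loop does):

    #include <alsa/asoundlib.h>

    void process_audio(short *buf, snd_pcm_uframes_t frames);  /* stand-in for the real DSP */

    /* Hypothetical instrumentation: log driver-side headroom around one block of work.
       "pcm" is an already-configured hw:* playback handle. */
    static void run_one_period(snd_pcm_t *pcm, short *buf, snd_pcm_uframes_t frames)
    {
        snd_pcm_sframes_t before = snd_pcm_avail(pcm);  /* frames the hw can accept now */
        process_audio(buf, frames);                     /* the actual audio work */
        snd_pcm_sframes_t after = snd_pcm_avail(pcm);   /* headroom left after processing */
        snd_pcm_writei(pcm, buf, frames);
        fprintf(stderr, "avail: before=%ld after=%ld\n", (long)before, (long)after);
    }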


Buffer level makes almost no sense.

All that matters is:

1. how soon after the device deadline (typically marked by an interrupt) does a kernel thread wake up to deal with the device? (you have no control over this)

2. what is the additional delay until a user-space application thread wakes up to deal with the required data flow? (you have very little control over this, assuming you've done the obvious and used the correct thread scheduling class and priority)

3. does user space code read & write data before the next device deadline? (you have a lot of control over this; a rough sketch follows this list)
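
Something like this for 2. and 3. together (the priority value, period length, and helper names are illustrative only, not a recommendation for any specific system):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    /* Illustrative: put the audio thread into SCHED_FIFO (point 2), then
       time how much of each period the user-space work actually consumes (point 3). */
    static void become_realtime(int priority)
    {
        struct sched_param sp = { .sched_priority = priority };
        if (pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp) != 0)
            fprintf(stderr, "could not switch to SCHED_FIFO (missing privileges?)\n");
    }

    static double elapsed_us(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3;
    }

    /* Wrap this around the read/process/write body of the audio loop. */
    static void timed_cycle(void (*body)(void), double period_us)
    {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        body();                                 /* read, process, write */
        clock_gettime(CLOCK_MONOTONIC, &end);
        double used = elapsed_us(start, end);
        if (used > period_us)
            fprintf(stderr, "overran the period: %.1f of %.1f us\n", used, period_us);
    }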

As noted above, cyclictest is the canonical tool for testing the kernel side of this sort of thing.
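
If you haven't used cyclictest, what it measures boils down to roughly this (a stripped-down sketch, not its actual code): sleep to an absolute deadline, then see how late the wakeup actually was.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const long interval_ns = 1000000;  /* 1 ms period for this example */
        struct timespec next, now;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (int i = 0; i < 1000; i++) {
            next.tv_nsec += interval_ns;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);
            long lateness_ns = (now.tv_sec - next.tv_sec) * 1000000000L
                             + (now.tv_nsec - next.tv_nsec);
            printf("wakeup latency: %ld ns\n", lateness_ns);
        }
        return 0;
    }

cyclictest does this properly (locked memory, configurable scheduling class, histograms), which is why it's the tool to reach for rather than rolling your own.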



