
DPC Latency Checker says:

A device driver cannot process data immediately in its interrupt routine. It has to schedule a Deferred Procedure Call (DPC) which basically is a callback routine that will be called by the operating system as soon as possible. ... There is one DPC queue per CPU available in the system. ... If any DPC runs for an excessive amount of time then other DPCs will be delayed by that amount of time. ... Unfortunately, many existing device drivers do not conform to this advice. Such drivers spend an excessive amount of time in their DPC routines, causing an exceptionally large latency for any other driver's DPCs. For a device driver that handles data streams in real-time it is crucial that a DPC scheduled from its interrupt routine is executed before the hardware issues the next interrupt. If the DPC is delayed and runs after the next interrupt has occurred, typically a hardware buffer overrun occurs and the flow of data is interrupted. A drop-out occurs.
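To make that mechanism concrete, here is a minimal WDM-style sketch of the pattern the tool describes: the interrupt service routine only quiets the hardware and queues a DPC, and the DPC does the actual data handling. The device-extension layout and the hardware-specific steps (left as comments) are invented for illustration; KeInitializeDpc/KeInsertQueueDpc and the routine signatures are the standard kernel ones.

    #include <ntddk.h>

    typedef struct _MY_DEVICE_EXTENSION {
        KDPC Dpc;                 /* one DPC object per device, initialized once */
        /* device registers, audio ring buffer, ... */
    } MY_DEVICE_EXTENSION, *PMY_DEVICE_EXTENSION;

    KDEFERRED_ROUTINE MyDpcRoutine;

    /* During device start-up:
     *     KeInitializeDpc(&ext->Dpc, MyDpcRoutine, ext);
     */

    /* ISR: runs at the device's IRQL, so it must do almost nothing. */
    BOOLEAN MyInterruptService(_In_ PKINTERRUPT Interrupt, _In_ PVOID ServiceContext)
    {
        PMY_DEVICE_EXTENSION ext = (PMY_DEVICE_EXTENSION)ServiceContext;
        UNREFERENCED_PARAMETER(Interrupt);

        /* (hardware-specific) check and acknowledge the device's interrupt
         * status register; return FALSE if this device did not interrupt */

        /* Defer the real work. The DPC lands on the current processor's
         * DPC queue and runs at DISPATCH_LEVEL once the IRQL drops. */
        KeInsertQueueDpc(&ext->Dpc, NULL, NULL);
        return TRUE;
    }

    /* DPC routine: this is the code whose running time is at issue. It has
     * to finish before the device's next interrupt, and while it runs it
     * delays every other DPC queued on the same processor. */
    VOID MyDpcRoutine(_In_ PKDPC Dpc, _In_opt_ PVOID DeferredContext,
                      _In_opt_ PVOID SystemArgument1, _In_opt_ PVOID SystemArgument2)
    {
        PMY_DEVICE_EXTENSION ext = (PMY_DEVICE_EXTENSION)DeferredContext;
        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(SystemArgument1);
        UNREFERENCED_PARAMETER(SystemArgument2);
        UNREFERENCED_PARAMETER(ext);

        /* (hardware-specific) drain the device FIFO into the ring buffer
         * and hand the samples to the next stage; keep this short */
    }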

Stack Overflow answer says:

A DPC is queued into the global DPC queue, and can be run on any processor. So if you really have a long-running DPC on one core, the other core is free to process another. Any timing information is really dependent on the number of processors you have and how many things are currently executing concurrently. So on multicore processors these numbers might vary widely.

Generally what I've read is that a fast dual-core is better than a slower quad-core for audio, since most audio apps aren't optimized to use more than one core.

But in modern computers it sounds like DPC issues are the bottleneck for audio production. Does this mean a quad-core processor would be better than a dual-core one? Other free cores could theoretically handle the audio DPCs while one is locked up by a rude Wi-Fi DPC routine. Is the queue shared between cores, so DPCs can be shuffled around to whichever one is free? Or is there one queue per core, allowing a core to be hijacked? What about virtual cores?
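For reference, the kernel APIs do treat the queues as per-processor: KeInsertQueueDpc normally puts a DPC on the queue of the processor it is called from (usually the one that serviced the interrupt), and a driver can explicitly retarget or reprioritize its own DPC. A small sketch, assuming a made-up device extension, using the documented KeSetTargetProcessorDpc and KeSetImportanceDpc calls:

    #include <ntddk.h>

    typedef struct _MY_DEVICE_EXTENSION {      /* hypothetical */
        KDPC Dpc;
    } MY_DEVICE_EXTENSION, *PMY_DEVICE_EXTENSION;

    KDEFERRED_ROUTINE MyDpcRoutine;            /* defined elsewhere in the driver */

    VOID MyConfigureDpc(PMY_DEVICE_EXTENSION ext)
    {
        KeInitializeDpc(&ext->Dpc, MyDpcRoutine, ext);

        /* Without the next call, KeInsertQueueDpc queues the DPC on whichever
         * processor it is called from (normally the one that took the
         * interrupt). The driver can pin it to a specific logical processor: */
        KeSetTargetProcessorDpc(&ext->Dpc, 1);        /* target CPU 1 */

        /* HighImportance inserts the DPC at the head of that processor's
         * queue so it runs ahead of other pending DPCs. */
        KeSetImportanceDpc(&ext->Dpc, HighImportance);
    }

As far as I can tell, the system itself does not shuffle an already-queued DPC to an idle core; extra cores only help to the extent that different drivers' interrupts (and therefore their DPCs) land on different processors in the first place.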

endolith

2 Answers


Latency in a Deferred Procedure Call (DPC) is caused by a driver taking a long time to do its work.

Adding more CPUs will not improve the time a poorly written driver takes to do its processing.
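To make that concrete: the fix has to happen inside the driver. The usual pattern is to keep the DPC itself minimal and push any lengthy processing down to a work item, which runs at PASSIVE_LEVEL on a system worker thread and can be preempted without stalling a DPC queue. A sketch, with a made-up device extension and the slow work left as a comment (IoAllocateWorkItem/IoQueueWorkItem are the documented calls):

    #include <ntddk.h>

    typedef struct _MY_DEVICE_EXTENSION {      /* hypothetical layout */
        PDEVICE_OBJECT DeviceObject;
        PIO_WORKITEM   WorkItem;               /* allocated once via IoAllocateWorkItem() */
    } MY_DEVICE_EXTENSION, *PMY_DEVICE_EXTENSION;

    /* Runs later at PASSIVE_LEVEL on a system worker thread, where it can be
     * preempted, so its running time no longer blocks any DPC queue. */
    VOID MySlowWork(_In_ PDEVICE_OBJECT DeviceObject, _In_opt_ PVOID Context)
    {
        UNREFERENCED_PARAMETER(DeviceObject);
        UNREFERENCED_PARAMETER(Context);
        /* decode, checksum, copy large buffers, call other drivers, ... */
    }

    /* DPC: do only the time-critical part, then hand off the rest. */
    VOID MyDpcRoutine(_In_ PKDPC Dpc, _In_opt_ PVOID DeferredContext,
                      _In_opt_ PVOID SystemArgument1, _In_opt_ PVOID SystemArgument2)
    {
        PMY_DEVICE_EXTENSION ext = (PMY_DEVICE_EXTENSION)DeferredContext;
        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(SystemArgument1);
        UNREFERENCED_PARAMETER(SystemArgument2);

        /* (hardware-specific) minimal work: empty the FIFO so the device
         * does not overrun before its next interrupt */

        /* Queue everything slow for PASSIVE_LEVEL. Note a given work item
         * must not be queued again until MySlowWork has run. */
        IoQueueWorkItem(ext->WorkItem, MySlowWork, DelayedWorkQueue, ext);
    }

A driver that instead does all of that processing inside the DPC will stall its processor's DPC queue no matter how many cores the machine has.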

Ian Boyd

Resurrecting an old thread (sorry). It seems to me the promise of parallel computing and multiple cores has not been fully delivered for audio work. In the (good?) olden days (as noted above), when dual processors became possible, the promise was that one processor could handle the graphics while another focused on audio, and you solved DPC issues with tweaking and higher audio buffers. That worked as an efficiency improvement in some DAWs (e.g. Logic).

But now my newish laptop has eight cores (supposedly like 8 separate processors, or 16 with hyperthreading on) and DPC latency issues are just as bad as ever. Even with Wi-Fi and most other features switched off, full power and largish buffers (512 samples), there's still the occasional glitch in DJing software (which is not itself taxing the CPU much). DPC latencies of 4,000-10,000! I live in terror of it happening in a performance, so I've been careful to use this machine in non-critical situations only.

My desktop dual-Xeon audio workstation, on the other hand, has DPC latency consistently around 500 and is perfect. Maybe the trick really is two actual CPUs, not cores.