Jedediah wrote: I'm curious what exactly the problems are with a managed audio API. I'm working on a pro audio app in C#, and we're planning to make a managed wrapper for ASIO (and I guess for WASAPI too). If it's not going to work, I'd like to know now!
The big problem with a managed solution is that it's a managed environment.
What does your app plan on doing when the GC comes along and blocks access to your audio buffer for 10 milliseconds? What happens when the JIT compiler decides to recompile your IL code?
If you're trying to do low-latency audio, it's critical that all the memory and code involved be locked down in RAM so that you don't incur paging hits. But in a managed environment, it is difficult to achieve that goal.
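To make the "locked down in memory" point concrete, here is a minimal sketch of locking an audio buffer's pages using the POSIX mlock call (on Windows the analogue is VirtualLock; in .NET you would additionally need to pin the managed buffer, e.g. with GCHandle, since page locking does nothing to stop the GC from relocating objects). The function names are made up for illustration:

```cpp
#include <sys/mman.h>
#include <cstddef>

// Lock a buffer's pages into physical RAM so the pager can't evict them
// while the real-time audio callback is running. Returns true on success.
// Note: locked memory counts against RLIMIT_MEMLOCK, so lock only what
// the low-latency path actually touches.
bool lock_audio_buffer(void* p, std::size_t bytes) {
    return mlock(p, bytes) == 0;
}

// Unlock when the stream is torn down.
bool unlock_audio_buffer(void* p, std::size_t bytes) {
    return munlock(p, bytes) == 0;
}
```

The same treatment has to be applied to the code pages of the callback path, which is exactly what a managed runtime makes hard: JIT-compiled code is allocated and managed by the runtime, not by you.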
Thanks for your quick reply Larry!
Our audio pipeline is going to have a low-latency part and a high-latency part, separated by a buffer. The low-latency part will be driven by ASIO callbacks and is where the VST plugins go. The high-latency part renders the document and will probably be buffered around 300-1000 ms.
Obviously we are going to need some native code to talk to ASIO, but the question is, how much? If the low-latency code is native, is that enough? Does the high-latency buffer have to live on the native heap as well? The ASIO callbacks run in a high-priority thread, which I assume will preempt the GC and JIT threads, right?
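One common shape for the buffer between the two parts is a single-producer/single-consumer lock-free ring: the high-latency renderer pushes samples, and the ASIO callback pops them without ever taking a lock, so the real-time thread can never block on anything the managed side holds. A minimal sketch in C++ (class and method names are my own, not part of ASIO):

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Single-producer/single-consumer lock-free ring buffer. One slot is
// left empty to distinguish "full" from "empty".
class SpscRing {
public:
    explicit SpscRing(std::size_t capacity)
        : buf_(capacity + 1), cap_(capacity + 1) {}

    // Called only from the renderer (producer) thread.
    bool push(float sample) {
        std::size_t w = write_.load(std::memory_order_relaxed);
        std::size_t next = (w + 1) % cap_;
        if (next == read_.load(std::memory_order_acquire))
            return false;                 // full: renderer is ahead, back off
        buf_[w] = sample;
        write_.store(next, std::memory_order_release);
        return true;
    }

    // Called only from the ASIO callback (consumer) thread.
    bool pop(float& sample) {
        std::size_t r = read_.load(std::memory_order_relaxed);
        if (r == write_.load(std::memory_order_acquire))
            return false;                 // empty: underrun, emit silence
        sample = buf_[r];
        read_.store((r + 1) % cap_, std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buf_;
    std::size_t cap_;
    std::atomic<std::size_t> read_{0};
    std::atomic<std::size_t> write_{0};
};
```

If the ring itself lives on the native heap and is page-locked, the callback side never touches the managed heap at all; the managed renderer only ever writes into it through interop, which sidesteps the question of GC pauses on the real-time path.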
I'm an old-school C++ hacker and I'm not afraid of a little interop, but I am afraid of hitting a brick wall.