Vista Audio Stack and API


Charles recently caught up with seasoned Niner Larry Osterman, an SDE and 20-year Microsoft veteran, and Elliot H Omiya, a software architect and audio guru, to dig into the inner workings of Vista's updated audio stack and new user-mode API. Much of the guts of Windows audio has been moved up into user land, and this has consequences both for Windows audio developers at the API level and for Windows itself in terms of programmability, reliability, and stability. Great stuff.

Enjoy


Follow the Discussion

  • Zeo Channel 9 :)
    Yea!!!! More Vista videos!!!!
  • I have no questions; good video. :)
  • Cool vid. Can't wait for more of these Going Deep interviews. You're a dev's best friend, Charles ;)

  • bonk I am the Wurstfachverkäuferin!
    Very interesting video. I don't know if I missed it, but is MF a COM API? If not, what style of API is it?

    More Vista videos!! Please!!
  • Chadk excuse me - do you has a flavor?
    It's Larry! He rocks.

    We want lots more of this kind.
    A Going Deep with the network stack in Vista would also be great 8-)
  • Jonathan Merriweather (Cyonix) Me
    Yeah, Charles is king :P
  • bonk wrote:
    Very interesting video. I don't know if I missed it, but is MF a COM API? If not, what style of API is it?

    More Vista videos!! Please!!


    MF is a COM-ish API, but you don't activate objects with CoCreateInstance.
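
    To illustrate (a rough sketch, assuming the MF headers as they eventually shipped; MFCreateMediaSession is just one example of an MF factory function, and error handling is omitted):

        #include <windows.h>
        #include <mfapi.h>
        #include <mfidl.h>

        int main()
        {
            // COM still has to be initialized -- MF hands out COM interfaces...
            CoInitializeEx(NULL, COINIT_MULTITHREADED);

            // ...but the platform is started with MFStartup, and objects come
            // from MF-specific factory functions, not CoCreateInstance on a CLSID.
            MFStartup(MF_VERSION);

            IMFMediaSession* pSession = NULL;
            if (SUCCEEDED(MFCreateMediaSession(NULL, &pSession)))
                pSession->Release();

            MFShutdown();
            CoUninitialize();
            return 0;
        }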
  • Minh WOOH!  WOOH!
    I'm just curious about how you provide per-app volume control for "legacy" apps that don't talk directly to WASAPI. Do you create a session for an app on the first request for sound service? How do you differentiate between two instances of the same .EXE?

    BTW, it's a bit funny that a video about audio has such a low audio level.
  • GRiNSER GRiNSER puts a smile on your face :)
    Is there a way in Vista to set which speakers an application plays through? For example, I want the media player on the two front speakers and a game on the two back speakers. If that's possible, can you also set the volume for each application on a per-speaker basis?
  • Minh wrote:
    I'm just curious about how you provide per-app volume control for "legacy" apps that don't talk directly to WASAPI. Do you create a session for an app on the first request for sound service? How do you differentiate between two instances of the same .EXE?

    BTW, it's a bit funny that a video about audio has such a low audio level.


    First off, with a couple of exceptions (DirectKS, ASIO, etc.) every app that plays audio interacts with WASAPI.  You simply can't play audio without it.  So the legacy apps are talking to WASAPI, even if they don't know it.

    Having said that, every audio stream is associated with an audio session; it is meaningless to talk about streams without sessions.

    If you don't do anything special, each stream in a process is associated with the same session: the first stream creates the session, and all subsequent streams that don't do anything special are associated with that session.

    Each session has an identifier that contains "interesting" things about the session.  Among the "interesting" things are the name of the endpoint on which the session is activated, the executable name, and the executable's process ID.  The last is how you differentiate between sessions with the same executable.

    When we save the volume for an app, we use all the "interesting" things except the process ID.
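
    To make that concrete, here's a rough sketch of how an app touches its own session's volume through WASAPI (assuming the interfaces as they eventually shipped; error handling omitted):

        #include <windows.h>
        #include <mmdeviceapi.h>
        #include <audioclient.h>

        void SetMySessionVolume(float level)
        {
            IMMDeviceEnumerator* pEnum = NULL;
            IMMDevice* pDevice = NULL;
            IAudioClient* pClient = NULL;
            ISimpleAudioVolume* pVolume = NULL;
            WAVEFORMATEX* pMixFormat = NULL;

            CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                             __uuidof(IMMDeviceEnumerator), (void**)&pEnum);
            pEnum->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);
            pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL,
                              (void**)&pClient);

            // Shared mode; the NULL session GUID at the end means "join the
            // process's default session" -- the one the first stream created.
            pClient->GetMixFormat(&pMixFormat);
            pClient->Initialize(AUDCLNT_SHAREMODE_SHARED, 0, 10000000, 0,
                                pMixFormat, NULL);

            // Per-session (i.e. per-app, by default) volume lives on the session.
            pClient->GetService(__uuidof(ISimpleAudioVolume), (void**)&pVolume);
            pVolume->SetMasterVolume(level, NULL);

            CoTaskMemFree(pMixFormat);
            pVolume->Release(); pClient->Release();
            pDevice->Release(); pEnum->Release();
        }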
  • GRiNSER wrote:
    Is there a way in Vista to set which speakers an application plays through? For example, I want the media player on the two front speakers and a game on the two back speakers. If that's possible, can you also set the volume for each application on a per-speaker basis?

    My guess is that you can't do that. Everything gets mixed in the mix buffer, and the result goes to the sound card endpoint. This could only be possible with multiple sound cards. Am I right, Larry?

  • gaelhatchue wrote:
    GRiNSER wrote: Is there a way in Vista to set which speakers an application plays through? For example, I want the media player on the two front speakers and a game on the two back speakers. If that's possible, can you also set the volume for each application on a per-speaker basis?

    My guess is that you can't do that. Everything gets mixed in the mix buffer, and the result goes to the sound card. This could only be possible with multiple sound cards. Am I right, Larry?


    Close.  You can't do that, but you CAN differentiate which device gets which audio output.  So while you can't split audio to different channels on a single adapter, if you have a set of USB headphones and a 5.1 surround system, you could redirect the output of your IM application to the USB headphones and the media player to the 5.1 system.

    Doing this requires some cooperation from the application, unfortunately - it doesn't come for free.  Applications that use the existing APIs to find out the preferred voice communications device will work correctly, but apps that just look for the default won't :(
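
    In API terms the difference is just which role an app asks for when it picks an endpoint (a sketch, assuming the endpoint roles as they eventually shipped):

        #include <windows.h>
        #include <mmdeviceapi.h>

        // eConsole is the plain default device; eCommunications is the user's
        // preferred voice device. An IM client that asks for eCommunications
        // follows the routing described above; one that only ever asks for
        // eConsole does not.
        IMMDevice* GetChatDevice(IMMDeviceEnumerator* pEnum)
        {
            IMMDevice* pDevice = NULL;
            pEnum->GetDefaultAudioEndpoint(eRender, eCommunications, &pDevice);
            return pDevice;  // NULL on failure; error handling omitted
        }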
  • GRiNSER GRiNSER puts a smile on your face :)
    Oh, that's a pity, so I have to use a USB headset if I want the music on my speakers and the phone calls on my headset? Why isn't every soundcard channel an endpoint? More granularity would be nice 8-)
  • If I understand correctly, the app streams are mixed in user mode by the Windows Audio service. But what if my soundcard does support hardware mixing?

    Am I losing those kinds of features, or does the service use those hardware capabilities by sending the streams unmixed to the sound card?

    As for FP processing instead of 16/24-bit int... isn't it a waste of CPU cycles to convert to float, just to convert back to int to pass the stream to the sound card (most of them don't support FP samples), when the user has poor low-end speakers? Or maybe you leave the choice not to use FP?
  • GRiNSER wrote:
    Oh, that's a pity, so I have to use a USB headset if I want the music on my speakers and the phone calls on my headset? Why isn't every soundcard channel an endpoint? More granularity would be nice


    Because people like to listen to audio in stereo, not mono.

    An endpoint is an address.  If each channel had its own address, you wouldn't be able to render stereo audio.
  • Tzim wrote:
    If I understand correctly, the app streams are mixed in user mode by the Windows Audio service. But what if my soundcard does support hardware mixing?

    Am I losing those kinds of features, or does the service use those hardware capabilities by sending the streams unmixed to the sound card?

    As for FP processing instead of 16/24-bit int... isn't it a waste of CPU cycles to convert to float, just to convert back to int to pass the stream to the sound card (most of them don't support FP samples), when the user has poor low-end speakers? Or maybe you leave the choice not to use FP?


    If your sound card supports hardware mixing, it means you can play non-PCM audio at the same time the OS is playing PCM audio.  So there's value in that.

    This software mixing thingy isn't new for Vista - it's essentially the same way that XP and Win2K (and Win98) worked.

    As far as the int-float thingy goes, many audio solutions available today support float rendering, and more and more will in the future.

    And the DSP is many orders of magnitude more accurate when working with floating-point numbers.
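
    A toy illustration of the arithmetic (not the engine's actual mixer): summing in float keeps headroom and full precision, and you only round and clamp once on the way back to the wire format.

        #include <stdint.h>
        #include <stddef.h>

        // Mix two 16-bit PCM streams via an intermediate float representation.
        void MixToInt16(const int16_t* a, const int16_t* b, int16_t* out, size_t n)
        {
            for (size_t i = 0; i < n; ++i) {
                // Float domain: the sum may exceed [-1, 1] without losing anything.
                float mixed = a[i] / 32768.0f + b[i] / 32768.0f;

                // Clamp and round exactly once, at the output stage.
                if (mixed > 1.0f)  mixed = 1.0f;
                if (mixed < -1.0f) mixed = -1.0f;
                out[i] = (int16_t)(mixed * 32767.0f);
            }
        }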
  • ZippyV Fired Up

    Does this new API make it easier to do sound stuff from a .NET application?

  • Charles Welcome Change
    ZippyV wrote:

    Does this new API make it easier to do sound stuff from a .NET application?


    Did you watch the video? :)

    All of this API is targeted at unmanaged C++ developers. I'd imagine the COM stuff could be interoped with in the way you interact with COM today from the managed world.

    C
  • William Stacey (staceyw) Before C# there was darkness...
    "All of this API is targeted at unmanaged C++ developers. I'd imagine the COM stuff could be interoped with in the way you interact with COM today from the managed world."

    First, thanks Charles and audio guys!

    I did not follow the "no managed" reasons either - other than understanding time constraints.

    It would seem they could just wrap WASAPI in a managed wrapper.  I mean, C# can still use pointers and unmanaged buffers if you have to.  And it would seem that with audio, you would be handing off most stuff directly to Win32 anyway, letting it do its thing async. So why would the CLR even come into play there?  I am surely missing something basic here.

    Even if it glitched, I would rather have some kind of managed support than zero.  Also, things like enumerating and listing devices (i.e. management) would not seem to have any negative effect on audio perf, as you're not doing any audio.  Seems a bit strange: Avalon is managed from the ground up; Vista audio - zero managed.  Don't get me wrong - love the new designs and APIs from what you showed.  Fine work.

    P.S. Wonder if this "kind" of design would work as a SIP in Singularity?  I mean lifting audio shared memory location(s) into the SIP.  I guess that could be inside the kernel component they lift into each SIP.  Interesting stuff.

    Thanks again!!
    --
    William
  • LaBomba Summer

    Nice!

    More Vista videos! :)

    - LB

  • Thanks for a very informative video. It's great that you have added explicit support for applications that need to directly access the audio hardware without kmixer-like processing. I hope there will be support for specifying the buffering settings and for programmatically controlling the sampling frequency at which the hardware operates. When writing directly to the soundcard DMA buffers, the sample format will be that of the hardware, not necessarily 32-bit floats, right?
    Will applications that directly talk to WDM drivers continue to work on Vista?

    Flavio.
  • I have been wondering if MS has been investigating an IP speaker/soundcard system that can be detected with UPnP.

    So you could have standalone speakers for the kitchen, bedrooms, etc. (à la Apple AirPort Express), then be able to name them in Windows and redirect audio to one or all of them for a home sound system, and then share them with other devices like laptops, media centres, etc.
    It would be quite interesting and would not need a big multichannel system that is wired to speakers all over the house.
  • LarryOsterman wrote:

    Close.  You can't do that, but you CAN differentiate which device gets which audio output.  So while you can't split audio to different channels on a single adapter, if you have a set of USB headphones and a 5.1 surround system, you could redirect the output of your IM application to the USB headphones and the media player to the 5.1 system.

    Doing this requires some cooperation from the application, unfortunately - it doesn't come for free.  Applications that use the existing APIs to find out the preferred voice communications device will work correctly, but apps that just look for the default won't

    GRiNSER wrote: Oh, that's a pity, so I have to use a USB headset if I want the music on my speakers and the phone calls on my headset? Why isn't every soundcard channel an endpoint? More granularity would be nice


    Because people like to listen to audio in stereo, not mono.

    An endpoint is an address.  If each channel had its own address, you wouldn't be able to render stereo audio.


    Last time I checked, most audio cards had plenty of stereo outputs: front, rear, side, etc. Are you saying that for every "user endpoint" one should have one "soundcard", so to speak? Many motherboards come with onboard audio and 3-4 stereo output pairs, and if I want to connect three user stereo endpoints (pairs of speakers), I have to get three soundcards? There are multiple network adapters on some motherboards and PCI cards, and their channels are not "bundled" by default, so why should this be different for audio? If the bundled mode (7.1) is the default, fine, but I may want to use a pair of loudspeakers and headphones, and that still leaves one or more stereo endpoints free on the card. If you can mix in eight distinct streams of data, you ought to be able to mix four pairs of stereo too?

    All in all, great improvements, but I guess the audio hardware vendors need to have a separate endpoint for every stereo output pair in the soundcard, otherwise from the sound of it the system is not flexible enough. All that is needed is changing the drivers, right?
  • I have two questions for Larry:

    The first is related to the business impact of audio and somewhat related to mixing: can you talk to Bill G. or Steve and have Microsoft buy this company? :)

    http://www.holosonics.com/

    If the processing could be done either through a driver or through a sound card, and if the price point could drop to the $100-200/node range, this would be awesome for businesses for things like conference calls (no feedback!), IP telephony, LiveMeeting/WebEx, online training, etc.  I've seen issues with users that want to watch a training video online, but because of their open cubicle position, it is too disruptive to other users.  They can't use headphones, because they need to be able to hear their phone and respond to other users walking by, plus it looks somewhat "unprofessional."

    One other question--as a hobby, I do some composing work, and sometimes I do this remotely via RDP.  Are there going to be changes in how audio is processed over RDP?  My current problem is that MIDI playback apparently doesn't transmit over RDP.  I use a program called Sibelius, and playback unfortunately does not work remotely.
  • An excellent video, really informative to watch; however, it does leave some questions unanswered.

    What about automatic jack sensing? I can just plug in a mic and it's detected as a mic, right? Same for my other speakers.

    What about HD Audio? OK, so we get WaveRT DMA implementations, but what else? It would be nice to hear about some other HD Audio tech, like the jack sensing above.

    How about DVD-Audio discs? How is that going to be handled? That's 192 kHz, right?

    What about multi-channel setups?

    MIDI?

    And finally, hardware mixing acceleration?

    I appreciate this article was really to give software developers an insight into the various structures of the new Windows audio platform, but perhaps in a new article you could cover some of the points above.

    Thanks!


    Mark

  • Sven Groot Don't worry... I'm a doctor.
    I'm a bit confused by this resampling business. It's nice that it can't suddenly decrease the quality, but otherwise?

    Resampling is very tricky, very difficult to do right, and can make a major difference in sound quality. A good resampling algorithm is also fairly CPU intensive. Most cards can do hardware resampling, although not always equally well; the Audigy series, including the Audigy2 and 4, for instance, will always resample to 48kHz regardless of the input mode (except 96 or 192kHz mode, of course), and they don't do a spectacular job. One of the benefits of the X-Fi is that it has much improved resampling, and more importantly that you can turn it off. And now you want to entrust this difficult process, which can have a great effect on quality, to app makers around the world? That doesn't sound like a good idea to me.

    Furthermore, if I set the mixing quality to 16-bit 48kHz in the control panel in Vista, what happens if I play higher-quality content, like a DVD with a DTS 96/24 track? Will it be downsampled to 48/16? Do I need to change the setting to get correct results for these higher-quality content types, and if so, how will an average user, who is not a Windows expert but still wants the most from his audio card, know how to do this?

    Lastly, how is digital output affected by this? If I tell PowerDVD to use S/PDIF and tell the soundcard to do digital passthrough so the DD/DTS signal gets sent directly to an external decoder, I assume this bypasses the whole mixer/resampling business? What if I want to use the decoder on the soundcard?
  • In which beta can we get full access to the new audio stack (Beta 2, I guess, but when)?

    Where can we get more information on how to use and program the new API stack (reference material, books, white papers, articles, etc.)?

    Can we get more information on creating and adding system effects to the audio streams?

    Examples of sending data directly to the DMA memory buffer with the UAA mapped in exclusive mode?

    When applications are in exclusive mode, do they pass 16-bit PCM or 32-bit float data into the DMA buffers?

  • Christian Liensberger (littleguru) <3 Seattle
    What is "PlaySound"? Never heard of it...
  • Sven Groot Don't worry... I'm a doctor.
    littleguru wrote:
    What is "PlaySound"? Never heard of it...

    I assume he means this.
  • MarkPerris wrote:

    An excellent video, really informative to watch; however, it does leave some questions unanswered.

    What about automatic jack sensing? I can just plug in a mic and it's detected as a mic, right? Same for my other speakers.


    If your audio solution is plumbed correctly, we'll detect when something is plugged into the jack and indicate it.  It's harder to differentiate speakers from microphones electrically (or so I've been told).

    MarkPerris wrote:

    What about HD Audio? OK, so we get WaveRT DMA implementations, but what else? It would be nice to hear about some other HD Audio tech, like the jack sensing above.


    It's in there :)

    MarkPerris wrote:

    How about DVD-Audio discs? How is that going to be handled? That's 192 kHz, right?

    What about multi-channel setups?


    We're doing a fair amount to ensure that multi-channel scenarios work perfectly out of the box.  I don't know about DVD-Audio, but you should be able to play them back without resampling.
    MarkPerris wrote:

    MIDI?

    And finally, hardware mixing acceleration?

    We're not doing a huge amount with MIDI in Vista. And hardware mixing?  We're not taking advantage of it, because it wouldn't help.  Mixing audio streams is essentially done by adding the various samples, but since we need to do post-mix processing of the samples (for software volume, software metering, and IHV-supplied global audio effects), the hardware mixer is unlikely to be any faster than the software mixer.

    Having said that, hardware mixers DO come into play if you want to mix our PCM streams with other streams being decoded by the hardware (for instance, if the hardware supports a separate AC3 decoding pin).

  • Sven Groot wrote:
    I'm a bit confused by this resampling business. It's nice that it can't suddenly decrease the quality, but otherwise?

    Resampling is very tricky, very difficult to do right, and can make a major difference in sound quality. A good resampling algorithm is also fairly CPU intensive. Most cards can do hardware resampling, although not always equally well; the Audigy series, including the Audigy2 and 4, for instance, will always resample to 48kHz regardless of the input mode (except 96 or 192kHz mode, of course), and they don't do a spectacular job. One of the benefits of the X-Fi is that it has much improved resampling, and more importantly that you can turn it off. And now you want to entrust this difficult process, which can have a great effect on quality, to app makers around the world? That doesn't sound like a good idea to me.

    Furthermore, if I set the mixing quality to 16-bit 48kHz in the control panel in Vista, what happens if I play higher-quality content, like a DVD with a DTS 96/24 track? Will it be downsampled to 48/16? Do I need to change the setting to get correct results for these higher-quality content types, and if so, how will an average user, who is not a Windows expert but still wants the most from his audio card, know how to do this?

    Lastly, how is digital output affected by this? If I tell PowerDVD to use S/PDIF and tell the soundcard to do digital passthrough so the DD/DTS signal gets sent directly to an external decoder, I assume this bypasses the whole mixer/resampling business? What if I want to use the decoder on the soundcard?


    Yes, it'll be downsampled :(  For content that's authored at a higher sample rate than 44.1 kHz, you'll need to change the default sample rate for the endpoint to avoid resampling artifacts.

    We had to choose a "reasonable" default, and something like 99.9% of all the audio content out there is sampled at 44.1kHz (since most of it originally came from a CD), so that's what we went with as the default (today - we may change our minds).  It's easy and relatively obvious to find the tab that lets you change the default sample rate, however.

    Also, we know SRC is hard; in Vista we've actually got a new sample rate converter that is orders of magnitude better than anything previously deployed in Windows.
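
    An app that wants to sidestep the system SRC can ask the engine for its mix format and produce audio at that rate itself (a sketch, assuming the shared-mode API as it eventually shipped):

        #include <windows.h>
        #include <audioclient.h>

        // Discover the shared-mode engine rate so we can render at the mix
        // rate ourselves instead of letting the engine resample for us.
        DWORD GetMixSampleRate(IAudioClient* pClient)
        {
            WAVEFORMATEX* pMixFormat = NULL;
            DWORD rate = 0;
            if (SUCCEEDED(pClient->GetMixFormat(&pMixFormat))) {
                rate = pMixFormat->nSamplesPerSec;  // e.g. 44100 or 48000
                CoTaskMemFree(pMixFormat);
            }
            return rate;
        }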
  • cairn wrote:
    In which beta can we get full access to the new audio stack (Beta 2, I guess, but when)?

    Every external Vista build has had the new audio stack.
    cairn wrote:

    Where can we get more information on how to use and program the new API stack (reference material, books, white papers, articles, etc.)?

    That'll be available in the Beta 2 timeframe.
    cairn wrote:

    Can we get more information on creating and adding system effects to the audio streams?

    If you're an IHV, sure.  Contact your Microsoft TAM.
    cairn wrote:

    Examples of sending data directly to the DMA memory buffer with the UAA mapped in exclusive mode?

    When applications are in exclusive mode, do they pass 16-bit PCM or 32-bit float data into the DMA buffers?

    In all cases, you pass whatever the device/mix format is to the audio engine - for both exclusive mode and shared mode (device format for exclusive, mix format for shared - often they're the same, but not always).
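
    In code the negotiation looks roughly like this (a sketch, assuming the API as it eventually shipped; error handling omitted):

        #include <windows.h>
        #include <audioclient.h>

        // Shared mode: feed the engine its own mix format.
        // Exclusive mode: feed a format the device itself accepts.
        BOOL DeviceTakesMixFormatExclusively(IAudioClient* pClient)
        {
            WAVEFORMATEX* pMixFormat = NULL;
            pClient->GetMixFormat(&pMixFormat);

            // For exclusive mode the closest-match out-parameter must be NULL;
            // S_OK means the device accepts this format directly.
            HRESULT hr = pClient->IsFormatSupported(AUDCLNT_SHAREMODE_EXCLUSIVE,
                                                    pMixFormat, NULL);
            CoTaskMemFree(pMixFormat);
            return hr == S_OK;
        }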

  • Sven Groot Don't worry... I'm a doctor.
    LarryOsterman wrote:
    We had to choose a "reasonable" default, and something like 99.9% of all the audio content out there is sampled at 44.1kHz (since most of it originally came from a CD), so that's what we went with as the default (today - we may change our minds).  It's easy and relatively obvious to find the tab that lets you change the default sample rate, however.

    IIRC in build 5231 the default is 16-bit 48kHz, at least on my system (Audigy2). On the Audigy2 it'd also be madness to pick anything else as the default, since like I said the Audigy cards resample to 48kHz themselves, so using a different frequency in Windows could result in resampling twice! The upshot is that if your resampling algorithm is really that good, you could get quite a nice improvement in quality on the Audigy2 since, like I said, the Audigy2's own resampler isn't all that good.

    Which reminds me: can a driver installation adjust this setting to fit the card?

    Also, you didn't answer my questions on digital output. :)
  • Oh! I'm interested in Vista.
    I hope Media Player 10 will be more reliable.
  • "LarryOsterman wrote
    cairn wrote:

    Can we get more information on creating and adding system effects to the Audio streams.

    If you're an IHV, sure.  Contact your Microsoft TAM.
    .

    But surely you don't need to be a hardware vendor to write stream effects within the Audio engine ( the vertical boxes on top of the audio engine.), or have I missed the point that this is contained on hardware. During ther video it was disscussed that the streams can have effects added to them at this point in the audio process, or did it mean in a derived class of IRenderClient


  • Very interesting video! But isn't the volume a little low? How ironic.

    Anyway, Larry - as usual - rocks!

  • At the beginning of the video they say that there are very few programmers around the world who can program in kernel mode.

    Lower-level audio is now in user mode instead of kernel mode.

    Where can I read more about the kernel mode and user mode they are talking about?
    Thanks.

    And with this new sound API, does this mean that I won't need to install, let's say, my Audigy sound card drivers?
  • Just a bit of history ...

    The noise made when stepping the "volume" too fast is called
    "zipper noise", dating from voltage-controlled-amp days.

    Why 16-bit in Win 3.x?
    16-bit/44.1k was the resolution/sample rate used by the Sony F1/501
    codecs that recorded on U-matic tape and were then transferred to CD.
    Later, the infamous Sony 3324 multitrack perpetuated this evil <g>

    Thanks for a great series of interviews,
    pjc

  • We are currently 'hacking' KSStream to synthesize the actual sample rate to use in our resampler to eliminate drift... but accurately synching multiple live devices is still impossible. An example is audio/video sync: matching the right sample from a sound card mic with the right frame of a webcam or camcorder.

    It seems a lot of fundamental features are still not implemented in the architecture, or are left 'optional' for the driver writer to expose...

    From what I hear and read, I'm not a happy camper.

    Stephan

  • CCoder32 Who you lookin' at?
    Awesome!  More Videos please!!

  • I'm curious what exactly the problems are with a managed audio API. I'm working on a pro audio app in C# and we're planning to make a managed wrapper for ASIO (and I guess for WASAPI too). If it's not going to work, I'd like to know now!
  • I watched the video and it sounds cool. I work with professional audio devices; I actually write the firmware.
    I'm not a driver developer, so don't shout too loud if this question is not supposed to be put here. :)
    The problem is like this: usually the user might change the number of channels that come and go to the device. For that you have either a USB- or 1394-connected device.
    Whenever this happens, Cubase gets really spooked or dies gracefully.
    The problem lies in the fact that WDM was designed for devices that always have the same channel configuration.
    The only solution is to disconnect the device and reconnect it after a certain time. But then you don't get sound anymore. I mean, it is supposed to be plug and play...
    After the device is reconnected, a rediscovery process takes place and it is OK. But you need to restart your application, change your setup, and so on... which is really ugly.
    (Usually musicians don't understand the difference between digital and analogue, and they expect that the device behaves like an analog device. Either they have crappy sound if the device is not in sync, or they get sound.)
    Is this taken care of in Vista?
    Does the application have the possibility to register for a stream format change event?
    Maybe I missed it, but I don't remember any mention of this subject.
    I believe this is a pretty cool feature, especially when you have more than one device on the bus (USB/1394). The problem lies mostly in the fact that you don't want to flood the bus with traffic if it is not necessary.
    Cheers,
    dacian
  • Jedediah wrote:
    I'm curious what exactly the problems are with a managed audio API. I'm working on a pro audio app in C# and we're planning to make a managed wrapper for ASIO (and I guess for WASAPI too). If it's not going to work, I'd like to know now!


    The big problem with a managed solution is that it's a managed environment.

    What does your app plan on doing when the GC comes along and blocks access to your audio buffer for 10 milliseconds?  What happens when the jitter decides to re-gen your IL code?

    If you're trying to do low-latency audio, it's critical that all the memory and code involved be locked down in memory so that you don't incur paging hits.  But in a managed environment, it is difficult to achieve that goal.
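
    For reference, the lock-down part is straightforward in native code -- a sketch using plain Win32:

        #include <windows.h>

        // Native code can pin its audio buffer so the pager never touches it;
        // the managed heap offers no equivalent guarantee for GC-owned memory.
        void* AllocLockedAudioBuffer(SIZE_T bytes)
        {
            void* p = VirtualAlloc(NULL, bytes, MEM_COMMIT | MEM_RESERVE,
                                   PAGE_READWRITE);
            if (p != NULL && !VirtualLock(p, bytes)) {  // pin the pages in RAM
                VirtualFree(p, 0, MEM_RELEASE);
                p = NULL;
            }
            return p;
        }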
  • blue fire wrote:
    I watched the video and it sounds cool. I work with professional audio devices; I actually write the firmware.
    I'm not a driver developer, so don't shout too loud if this question is not supposed to be put here.
    The problem is like this: usually the user might change the number of channels that come and go to the device. For that you have either a USB- or 1394-connected device.
    Whenever this happens, Cubase gets really spooked or dies gracefully.
    The problem lies in the fact that WDM was designed for devices that always have the same channel configuration.
    The only solution is to disconnect the device and reconnect it after a certain time. But then you don't get sound anymore. I mean, it is supposed to be plug and play...
    After the device is reconnected, a rediscovery process takes place and it is OK. But you need to restart your application, change your setup, and so on... which is really ugly.
    (Usually musicians don't understand the difference between digital and analogue, and they expect that the device behaves like an analog device. Either they have crappy sound if the device is not in sync, or they get sound.)
    Is this taken care of in Vista?
    Does the application have the possibility to register for a stream format change event?
    Maybe I missed it, but I don't remember any mention of this subject.
    I believe this is a pretty cool feature, especially when you have more than one device on the bus (USB/1394). The problem lies mostly in the fact that you don't want to flood the bus with traffic if it is not necessary.
    Cheers,
    dacian


    Apps can register for stream format events, or they'll receive a distinguished error when the stream format changes.
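
    A minimal sketch of that registration (assuming the session-notification interface as it eventually shipped; only the disconnect handler does anything here, and the object would be registered via IAudioSessionControl::RegisterAudioSessionNotification):

        #include <windows.h>
        #include <audiopolicy.h>

        class SessionEvents : public IAudioSessionEvents
        {
            LONG m_ref;
        public:
            SessionEvents() : m_ref(1) {}

            // IUnknown boilerplate.
            ULONG STDMETHODCALLTYPE AddRef() { return InterlockedIncrement(&m_ref); }
            ULONG STDMETHODCALLTYPE Release()
            {
                ULONG r = InterlockedDecrement(&m_ref);
                if (r == 0) delete this;
                return r;
            }
            HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void** ppv)
            {
                if (riid == __uuidof(IUnknown) || riid == __uuidof(IAudioSessionEvents)) {
                    *ppv = this; AddRef(); return S_OK;
                }
                *ppv = NULL; return E_NOINTERFACE;
            }

            // The interesting one: the engine tells us why the stream died.
            HRESULT STDMETHODCALLTYPE OnSessionDisconnected(AudioSessionDisconnectReason reason)
            {
                if (reason == DisconnectReasonFormatChanged) {
                    // Tear down and re-create the stream in the new format here.
                }
                return S_OK;
            }

            // Remaining notifications: no-ops for this sketch.
            HRESULT STDMETHODCALLTYPE OnDisplayNameChanged(LPCWSTR, LPCGUID) { return S_OK; }
            HRESULT STDMETHODCALLTYPE OnIconPathChanged(LPCWSTR, LPCGUID) { return S_OK; }
            HRESULT STDMETHODCALLTYPE OnSimpleVolumeChanged(float, BOOL, LPCGUID) { return S_OK; }
            HRESULT STDMETHODCALLTYPE OnChannelVolumeChanged(DWORD, float*, DWORD, LPCGUID) { return S_OK; }
            HRESULT STDMETHODCALLTYPE OnGroupingParamChanged(LPCGUID, LPCGUID) { return S_OK; }
            HRESULT STDMETHODCALLTYPE OnStateChanged(AudioSessionState) { return S_OK; }
        };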
  • LarryOsterman wrote:
    Jedediah wrote: I'm curious what exactly the problems are with a managed audio API. I'm working on a pro audio app in C# and we're planning to make a managed wrapper for ASIO (and I guess for WASAPI too). If it's not going to work, I'd like to know now!


    The big problem with a managed solution is that it's a managed environment.

    What does your app plan on doing when the GC comes along and blocks access to your audio buffer for 10 milliseconds?  What happens when the jitter decides to re-gen your IL code?

    If you're trying to do low-latency audio, it's critical that all the memory and code involved be locked down in memory so that you don't incur paging hits.  But in a managed environment, it is difficult to achieve that goal.


    Thanks for your quick reply, Larry!

    Our audio pipeline is going to have a low-latency part and a high-latency part, separated by a buffer. The low-latency part will be driven by ASIO callbacks and is where the VST plugins go. The high-latency part renders the document and will probably be buffered around 300ms-1000ms.

    Obviously we are going to need some native code to talk to ASIO, but the question is, how much? If the low-latency code is native, is that enough? Does the high-latency buffer have to be on the native heap as well? The ASIO callbacks run in a high-priority thread, which I assume will preempt the GC and JIT threads, right?

    I'm an old-school C++ hacker and I'm not afraid of a little interop, but I am afraid of hitting a brick wall.
  • I'm really curious about the benefits of the new RT driver for professional audio apps.  Specifically, in the video at 27:06 it says that the buffer release is a no-op.  Wow, user mode talking directly to hardware?  How is this possible?  Is there a kernel-mode switch under the covers?  I'm curious whether this new feature will be faster than ASIO for audio apps.
  • kv331 audio SynthMaster: Available in two flavors: Free & $99
    LarryOsterman wrote:

    What does your app plan on doing when the GC comes along and blocks access to your audio buffer for 10 milliseconds?  What happens when the jitter decides to re-gen your IL code?


    I agree with Larry. When it comes to "real-time/low latency" processing, forget about a managed environment. Just do it the old C++ way; don't be lazy :)

    Right now, all the major sequencer apps (Pro Tools, Cubase, etc...) are written in C++, and I don't think any of those folks have any plans to move to a managed environment.
  • It looks like the new Vista audio is much better, but I got Beta 2 of Windows Vista build 5342 and the audio does not work. I have a VIA AC'97 audio card; I updated the driver, then installed a Vista audio driver, and it still does not work. I can get the "sound guy" to open in safe mode, but it says there are no devices. In normal boot-up it errors every time I try to play a sound (Media Player 11 says something like there was a problem not related to the player).

    If you can offer ANY help please e-mail me at socalhazard@gmail.com

    Thanks
    Bullfrog

    PS: How do I get on as Admin in normal startup? (The registry show_admin.reg did not work.)
  • OK, so I'm a developer and I want to figure out how to change the speaker settings programmatically.

    For example, I'm sure I'm not the only person with an analog 5.1 speaker system that I plug headphones into... they have a jack right on the right front speaker to do this.  But of course the headphones are 2-channel and I don't hear the 5.1.

    So I want to create an application to rapidly switch the main speakers between 5.1 and 2-speaker settings.

    What API do I need to research?  Should I be trying to interface with the control panel or going at the audio API?

    Thank you.
  • LarryOsterman wrote:
    gaelhatchue wrote:
    GRiNSER wrote:
    Is there a way in Vista to set which speakers an application plays through? For example, I want the media player on the two front speakers and a game on the two back speakers. If that's possible, can you also set the volume for each application on a per-speaker basis?


    Close.  You can't do that, but you CAN differentiate which device gets which audio output.  So while you can't split audio to different channels on a single adapter, if you have a set of USB headphones and a 5.1 surround system, you could redirect the output of your IM application to the USB headphones and the media player to the 5.1 system.

    Doing this requires some cooperation from the application, unfortunately - it doesn't come for free.  Applications that use the existing APIs to find out the preferred voice communications device will work correctly, but apps that just look for the default won't


    Alwyn: I have searched high and low on the internet for an answer, and nobody can help me. Finally I found a group of people with knowledge on this subject! My need is similar to GRiNSER's; however, I only want to differentiate between audio and voice playback.

    I have a set of external speakers and a USB headset (LifeChat LX-300). I play Microsoft Flight Simulator (FSX) a lot. What I want is to send the engine sound (audio) to the external speakers and the Air Traffic Control voice to the headset. You could set it up like that in XP, but not in Vista. Any ideas?
  • And still no sign of C# wrappers... hmm...

    There are bits and pieces around various websites, but they all pertain to volume control, nothing on audio capture.
