Concurrency and Coordination Runtime
Comment removed at the user's request.
Cool vid. Can't wait for more of these deep-dive interviews. You're a dev's best friend, Charles.
bonk wrote:Very interesting video. I don't know if I missed it, but is MF a COM API? If not, what style of API is it?
More Vista Videos !! Please !!
Minh wrote:I'm just curious about how you provide per-app volume control for "legacy" apps that don't talk directly to WASAPI. Do you create a session for an app on its first request for sound service? How do you differentiate between two instances of the same .EXE?
BTW, it's a bit funny that a video about audio has such a low audio level.
GRiNSER wrote:Is there an ability in Vista to set on which speakers you hear an application? For example, I want the media player on the two front speakers and the game on the two back speakers. If that's possible, is there also a possibility to set the volume for each application on a per-speaker basis?
gaelhatchue wrote:
GRiNSER wrote: Is there an ability in Vista to set on which speakers you hear an application? For example, I want the media player on the two front speakers and the game on the two back speakers. If that's possible, is there also a possibility to set the volume for each application on a per-speaker basis?
My guess is that you can't do that. Everything gets mixed in the mix buffer, and the result goes to the sound card. This would only be possible with multiple sound cards. Am I right, Larry?
GRiNSER wrote:Oh, that's a pity. So I have to use a USB headset if I want the music on my speakers and the phone calls on my headset? Why isn't every sound-card channel an endpoint? More granularity would be fine.
Tzim wrote:If I understand correctly, the app streams are mixed in user mode by the Windows Audio Service. But what if my sound card does support hardware mixing?
Am I losing those kinds of features, or does the service use those hardware capabilities by sending the streams 'unmixed' to the sound card?
As for FP processing instead of 16/24-bit int: isn't it a waste of CPU cycles to convert to float, just to convert back to int to pass the stream to the sound card (most of them don't support FP samples), when the user has poor low-end speakers? Or maybe you leave the choice not to use FP?
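For context on the cost Tzim worries about: converting between 16-bit int and 32-bit float is a single multiply (plus a clamp on the way back) per sample, which is cheap next to mixing and effects processing. A minimal sketch of the standard conversion; the function names are mine, not anything from the Vista mixer:

```cpp
#include <cstdint>

// Convert a 16-bit PCM sample to the 32-bit float format used by the mixer.
// The scale factor 1/32768 maps the int16 range onto roughly [-1.0, 1.0).
inline float Int16ToFloat(int16_t s) {
    return static_cast<float>(s) / 32768.0f;
}

// Convert back to int16 for hardware that only accepts integer samples,
// clamping so that out-of-range float values saturate instead of wrapping.
inline int16_t FloatToInt16(float f) {
    if (f >= 1.0f)  return 32767;
    if (f <= -1.0f) return -32768;
    return static_cast<int16_t>(f * 32768.0f);
}
```

The round trip is lossless for any in-range int16 value, which is one reason a float intermediate format costs quality nothing even on integer-only hardware.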
Does this new API make it easier to do sound stuff from a .NET application?
ZippyV wrote:Does this new API make it easier to do sound stuff from a .NET application?
Nice!
More Vista videos!
- LB
LarryOsterman wrote:
Close. You can't do that, but you CAN differentiate which device gets which audio output. So while you can't split audio to different channels on a single adapter, if you have a set of USB headphones and a 5.1 surround system, you could redirect the output of your IM application to the USB headphones and the media player to the 5.1 system.
Doing this requires some cooperation from the application, unfortunately - it doesn't come for free. Applications that use the existing APIs to find out the preferred voice communications device will work correctly, but apps that just look for the default won't.
GRiNSER wrote: Oh, that's a pity. So I have to use a USB headset if I want the music on my speakers and the phone calls on my headset? Why isn't every sound-card channel an endpoint? More granularity would be fine.
Because people like to listen to audio in stereo, not mono.
An endpoint is an address. If each channel had its own address, you'd not be able to render stereo audio.
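To illustrate why a single channel can't be its own endpoint: the system mixer sums every application's interleaved float stream into one shared mix buffer, so all channels of an endpoint travel together. A toy sketch of that summing step (purely illustrative, not the actual Vista mixer code):

```cpp
#include <cstddef>
#include <vector>

// Mix several interleaved stereo float streams into one mix buffer by
// summing them sample-by-sample. Every stream contributes to every channel
// of the endpoint, which is why channels aren't independently addressable.
std::vector<float> MixStreams(const std::vector<std::vector<float>>& streams,
                              std::size_t frames, std::size_t channels) {
    std::vector<float> mix(frames * channels, 0.0f);
    for (const auto& s : streams)
        for (std::size_t i = 0; i < frames * channels && i < s.size(); ++i)
            mix[i] += s[i];
    return mix;
}
```

Because the float samples are simply added, the mix can exceed [-1.0, 1.0]; the real pipeline limits or clamps before handing the buffer to integer hardware.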
An excellent video, really informative to watch; however, it does leave some questions unanswered.
What about automatic jack sensing? I can just plug in a mic and it's detected as a mic, right? Same for my other speakers.
What about HD Audio? OK, so we get WaveRT DMA implementations, but what else? It would be nice to hear about some other HD Audio tech, like the jack sensing above.
How about DVD-Audio discs? How is that going to be handled? That's 192kHz, right?
What about multi-channel setups?
MIDI?
And finally, hardware mixing acceleration?
I appreciate this video was really meant to give software developers an insight into the various structures of the new Windows audio platform, but perhaps in a new article you could cover some of the points above.
Thanks!
Mark
In which beta can we get full access to the new audio stack (Build 2, I guess, but when)?
Where can we get more information on how to use and program the new API stack (reference material, books, white papers, articles, etc.)?
Can we get more information on creating and adding system effects to the audio streams?
Examples of sending data directly to the DMA memory buffer with the UAA mapped in exclusive mode.
When applications are in exclusive mode, do they pass 16-bit PCM or 32-bit float data into the DMA buffers?
MarkPerris wrote:
An excellent video, really informative to watch, however it does leave some questions unanswered.
What about automatic jack sensing? I can just plug in a mic and it's detected as a mic, right? Same for my other speakers.
MarkPerris wrote:
What about HD Audio? OK, so we get WaveRT DMA implementations, but what else? It would be nice to hear about some other HD Audio tech, like the jack sensing above.
MarkPerris wrote:
How about DVD-Audio discs? How is that going to be handled? That's 192kHz, right?
What about multi-channel setups?
MarkPerris wrote:
MIDI?
And finally, hardware mixing acceleration?
Sven Groot wrote:I'm a bit confused on this resampling business. It's nice that it can't suddenly decrease the quality, but otherwise?
Resampling is a very tricky business that is very difficult to do right and can make a major difference in sound quality. A good resampling algorithm is also fairly CPU-intensive. Most cards can do hardware resampling, although not always equally well; the Audigy series, including the Audigy2 and 4, for instance, will always resample to 48kHz regardless of the input mode (except 96 or 192kHz mode, of course), and they don't do a spectacular job. One of the benefits of the X-Fi is that it has much improved resampling, and more importantly that you can turn it off. And now you want to entrust this difficult process, which can have a great effect on quality, to app makers around the world? That doesn't sound like a good idea to me.
Furthermore, if I set the mixing quality to 16-bit 48kHz in the Control Panel in Vista, what happens if I play higher-quality content, like a DVD with a DTS 96/24 track? Will it be downsampled to 48/16? Do I need to change the setting to get correct results for these higher-quality content types, and if so, how will an average user, who is not a Windows expert but still wants the most from his audio card, know how to do this?
Lastly, how is digital output affected by this? If I tell PowerDVD to use S/PDIF and tell the sound card to do digital passthrough so the DD/DTS signal gets sent directly to an external decoder, I assume this bypasses the whole mixer/resampling business? What if I want to use the decoder on the sound card?
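The quality gap discussed above comes down to the interpolation filter a resampler uses. The crudest approach is linear interpolation between neighboring samples: cheap, but it attenuates and aliases high frequencies, which is exactly the kind of artifact the thread is complaining about. A sketch of that naive approach for a mono stream (my own illustration; good resamplers use windowed-sinc filters instead, and this is not what Vista's resampler does):

```cpp
#include <cstddef>
#include <vector>

// Naive linear-interpolation resampler for a mono float stream.
// For each output sample, find the fractional position in the source
// and blend the two neighboring source samples.
std::vector<float> ResampleLinear(const std::vector<float>& in,
                                  double srcRate, double dstRate) {
    if (in.empty()) return {};
    std::size_t outLen =
        static_cast<std::size_t>(in.size() * dstRate / srcRate);
    std::vector<float> out(outLen);
    for (std::size_t i = 0; i < outLen; ++i) {
        double pos = i * srcRate / dstRate;   // fractional source position
        std::size_t i0 = static_cast<std::size_t>(pos);
        std::size_t i1 = (i0 + 1 < in.size()) ? i0 + 1 : i0;  // clamp at end
        double frac = pos - i0;
        out[i] = static_cast<float>(in[i0] * (1.0 - frac) + in[i1] * frac);
    }
    return out;
}
```

Even in this toy form you can see the trade-off: one multiply-add per output sample versus the dozens of filter taps a high-quality windowed-sinc resampler evaluates.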
cairn wrote:In which beta can we get full access to the new audio stack (Build 2, I guess, but when)?
cairn wrote:
Where can we get more information on how to use and program the new API stack (reference material, books, white papers, articles, etc.)?
cairn wrote:
Can we get more information on creating and adding system effects to the audio streams?
cairn wrote:
Examples of sending data directly to the DMA memory buffer with the UAA mapped in exclusive mode.
When applications are in exclusive mode, do they pass 16-bit PCM or 32-bit float data into the DMA buffers?
LarryOsterman wrote:We had to choose a "reasonable" default, and something like 99.9% of all the audio content out there is sampled at 44.1kHz (since most of it originally came from a CD), so that's what we went with as the default (today - we may change our minds). It's easy and relatively obvious to find the tab that lets you change the default sample rate, however.
cairn wrote:
Can we get more information on creating and adding system effects to the audio streams?
We are currently 'hacking' KSstream to synthesize the actual sample rate to use in our resampler to eliminate drift... but accurately syncing multiple live devices is still impossible. An example is audio/video sync: matching the right sample from a sound-card mic with the right frame of a webcam or camcorder.
It seems a lot of fundamental features are still not implemented in the architecture, or are left 'optional' for the driver writer to expose...
From what I hear and read, I'm not a happy camper.
Stephan
Jedediah wrote:I'm curious what exactly the problems are with a managed audio API. I'm working on a pro audio app in C# and we're planning to make a managed wrapper for ASIO (and I guess for WASAPI too). If it's not going to work, I'd like to know now!
blue fire wrote:I watched the video and it sounds cool. I'm working with professional audio devices; I actually write the firmware.
I'm not a driver developer, so don't shout too loud if this question is not supposed to be put here.
The problem is like this: usually the user might change the number of channels that come from and go to the device. For that you have either a USB- or 1394-connected device.
Whenever this happens, Cubase gets really spooked or dies gracefully.
The problem lies in the fact that WDM was designed for devices that always have the same channel configuration.
The only solution is to disconnect the device and reconnect it after a certain time. But then you don't get sound anymore. I mean, it is supposed to be plug and play...
After the device is reconnected, a rediscovery process takes place and it is OK. But you need to restart your application, change your setup, and so on... which is really ugly.
(Usually musicians don't understand the difference between digital and analogue, and they expect the device to behave like an analog device. Either they have crappy sound if the device is not in sync, or they get sound.)
Is this taken care of in Vista?
Does the application have the possibility to register for a stream-format-change event?
Maybe I missed it, but I don't remember any mention of this subject.
I believe this is a pretty cool feature, especially when you have more than one device on the bus (USB/1394). The problem lies mostly in the fact that you don't want to flood the bus with traffic if it is not necessary.
Cheers,
dacian
LarryOsterman wrote:
Jedediah wrote:I'm curious what exactly the problems are with a managed audio API. I'm working on a pro audio app in C# and we're planning to make a managed wrapper for ASIO (and I guess for WASAPI too). If it's not going to work, I'd like to know now!
The big problem with a managed solution is that it's a managed environment.
What does your app plan on doing when GC comes along and blocks access to your audio buffer for 10 milliseconds? What happens when the jitter decides to re-gen your IL code?
If you're trying to do low latency audio, it's critical that all the memory and code involved be locked down in memory so that you don't incur paging hits. But in a managed environment, it is difficult to achieve that goal.
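The usual way native low-latency code sidesteps these hazards is to keep the real-time path free of anything that can block: the render thread only reads from a pre-allocated buffer that a non-real-time thread fills. A minimal single-producer/single-consumer ring buffer sketch, my own illustration of that pattern rather than anything from WASAPI:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Single-producer single-consumer ring buffer: a worker thread pushes,
// the audio thread pops. No locks and no allocation after construction,
// so the real-time side never stalls on a mutex, the heap, or a collector.
class SpscRing {
public:
    explicit SpscRing(std::size_t capacity)
        : buf_(capacity + 1), head_(0), tail_(0) {}

    bool Push(float v) {  // called by the worker thread
        std::size_t t = tail_.load(std::memory_order_relaxed);
        std::size_t next = (t + 1) % buf_.size();
        if (next == head_.load(std::memory_order_acquire)) return false; // full
        buf_[t] = v;
        tail_.store(next, std::memory_order_release);
        return true;
    }

    bool Pop(float* v) {  // called by the audio thread
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire)) return false;    // empty
        *v = buf_[h];
        head_.store((h + 1) % buf_.size(), std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buf_;
    std::atomic<std::size_t> head_, tail_;
};
```

In a managed app the same structure helps with jitter from the producer side, but it can't stop the GC from pausing the consumer thread itself, which is the core of the objection above.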