We've had some thoughts in that area too. But I'll admit that I don't know enough about CE drivers, so I don't know whether they're enough like Windows drivers to really benefit from WDF. What do you folks think?
I don't like to provide comparison numbers because they're sort of silly. It's a common question, but it's like asking whether a Porsche or a diesel Jetta is faster. Kernel mode will always be faster if you're focusing on extreme performance. However, the real question should be whether UMDF can meet your requirements.
That's also a question which can't be trivially answered, since it all depends on your project: how many requests you need to send per second, how much data you need to transfer per second with those requests, what your required latency is, how much CPU and memory you need left over afterwards, and so on.
If you look at our WinHEC 2006 slides there's a slide on performance which has our "public" numbers.
My suggestion with any new technology would be to do some simple prototyping to see if it can meet your requirements. Take the skeleton sample, add a queue and some dispatch routines, and mock up the interface your driver would expose to an app. Then write a mock app that drives the device the way you think a typical application would, and measure the result. If you can push the amount of data you need down to the driver, get notifications back within your latency requirements, and still have enough CPU left to do real work, then UMDF is a good fit. Otherwise you'll need to consider a kernel-mode solution.
I'm sorry that there's not an easy answer.
On the second question: no. UMDF drivers run at normal priority. Note that your kernel-mode drivers typically do as well (ignoring interrupt handling, but that's not really a "thread priority" thing). We've considered boosting the threads to see if we can decrease latency, but we also want to be sure that an errant driver can't starve the system for CPU time.
Personally I think it's a little better than a kernel driver. For a KM driver you never know what priority you'll be running at, since you're typically invoked on some arbitrary client thread. So it's possible to end up with interesting priority-inversion cases when you're acquiring locks.
You're correct that a user-to-kernel transition has to occur in both cases. The difficulty with UMDF is that it then requires a kernel-to-user transition (to get the message up into the host process) which involves the scheduler. Sending a request into a user-mode driver will always be slower than sending it to a kernel mode driver.
However, there are a number of things which make this more palatable. First, many devices are used sporadically and don't require huge amounts of throughput. Take your typical input device (keyboard, mouse, joystick, etc.): latency is a concern, but throughput definitely isn't.
There are also techniques we can use to reduce the number or cost of the context switches required for any given I/O, which would help higher-throughput devices. Batching of messages might be possible, though that's tricky when you don't know what the dependencies between the messages might be. Or an improved IPC mechanism could donate the remaining time slice from the sending thread to the receiving thread (just a random thought) to reduce the time introduced by thread scheduling.
Finally, the cost should continue to shrink relative to the computing power of the average PC. You'll still have to pay for what you're getting (for the customer, isolation; for the developer, ease of development and customers who like isolation), but the machine will have many more resources to pay with.
Currently UMDF requires that you write native code. I have hopes that we'll be able to enable managed in the future, but for now it's C or C++.
I imagine you could put together a native shim that would let you write a managed driver. I don't honestly know if the interfaces we've defined will work through COM Interop or if you'd have to provide hand-built wrappers.
The WMP 11 installer is speaking the truth. UMDF is also being used by WMP on XP.
The 1.0 version is for XP only (SP2 or later, or Windows Server 2003 SP1 for x64) and it's used by the Media Transfer Protocol drivers that WMP ships. The plan is that this will RTM in late Sept/early Oct of this year. Although it's a general release, it's an early snapshot of UMDF, and I suggest anyone who really wants to use it wait for UMDF 1.5.
The 1.5 version is for XP (same limitations as 1.0) and Vista and the redistributables will ship as part of the Vista WDK.
We're also concerned about the performance aspects of allowing managed drivers. Keeping the footprint of an idle managed driver host low would be a priority of ours. The best driver doesn't consume any resources when it's idle.
One of the reasons I'm eager to get managed drivers working is actually to help with some of our performance issues. Right now we run one host per device to provide isolation. If the work the CLR team has done to improve app domains is as good as they say, then we could potentially use those to isolate device stacks from each other and reduce the number of host processes required on a machine.
Not that it's the only option for pooling, but it's an attractive one for a number of reasons.