This is truly excellent; really good to see this kind of thing on Channel 9.
Regarding the text rendering, did any of the DX10-specific optimisations ever materialise? A long time ago, in the Avalon days, I remember talk of moving the text path completely into hardware on DX10-class video cards, as opposed to the DX9 path's mix of hardware ClearType and software glyph composition / alpha blending.
It's great to know that WPF is continually being updated and improved, as I personally believe it's a really forward-looking technology. I really hope the new WPF Yahoo Messenger actually comes out sometime soon, as it seems to me like the first mass-audience, consumer-level WPF app.
On the technical side, I remember seeing, around PDC '06, a series of slides stating that WPF would work via DX10 (where possible in hardware) to fully move the text-rendering path into hardware. This meant glyph performance would be up there with GDI, if not even faster, as well as better GPU task allocation thanks to the advanced scheduling features of DX10-class GPUs. Is DX10 support for the presentation backend coming in this 3.5 release? If not, is it planned for sometime in the future?
It'd be a real shame to see WPF fall into the same trap GDI acceleration did: very good at what it was, but locked to a very specific set of basic hardware functionality that went out of date around '98 and was never extended, with the possible exception of hardware alpha transparency in Windows 2000.
This is an interesting video, and great to see something from Microsoft UK. I have to say, I've never quite understood the general animosity between designers and developers. The words 'chalk and cheese' spring to mind, but I think a lot of it stems from bruised egos and misplaced confidence on both sides of the fence. Most programmers fancy themselves as a bit of a designer too: most of us have done more than enough UI design in our time, and a lot of us are given total freedom over it on smaller projects. This leads to something that often seems to cause upset, namely programmers quietly fiddling with the designer's ideas as we see fit during implementation. On reflection, I don't think I've ever seen designers fiddle with the code.
Having a good team is absolutely critical. You should be able to get to the situation where the programmer trusts the designer enough to implement his or her designs as laid out, because the designer probably knows much more about design than you do, a fact which is sometimes difficult to accept. It's like putting on your smartest clothes to impress, only to be shot down and bluntly told your style is terribly '90s.
I'm not saying that there should be a chasm dividing them, far from it; thrashing these things out together is by definition 'working together'. But there has to be a level of respect for each other's role and judgement.
It's also very interesting, coming from an ASP.NET development background, that Windows development is moving so closely towards a similar separation of code and layout. That's been happening in the web-development world for a long time. Web agencies that make a business of providing only layout and graphics are commonplace, but UI development agencies? I haven't heard of many, except for the excellent frog design, who seem to have quickly taken up WPF for use in real projects. This separation, I think, will open up a huge new market: the same way we have 'we'll do the XHTML and graphics, you do the coding' agencies now, this time it'll be 'you do the C#, we'll do the XAML'. This is even more apparent when you consider the market positioning of applications such as Blend and Interactive Designer. A good example of what can be done with this kind of approach is the new Yahoo! Messenger for Vista, a taste of things to come, or so I hope.
I guess it depends on what you understand 'tear-free' to mean. I think the definition meant here was 'in sync with your display's v-sync and without a slow redraw of the background window', which the DWM does pretty well. As you say, though, resizing is not a good experience, but I think any OS is much the same. In fact, IIRC, Mac OS X was particularly bad at this, despite having beautifully smooth window dragging even on a GeForce 2.
Moving a window, with the DWM, just means moving a textured surface over the one beneath it. In GDI, the surface was moved, but the one below had the extra step of having to be redrawn. Resizing a window is a different problem, because it means all the contents inside have to reflow and redraw. This involves considerable computational and rendering overhead, especially in rich, long-evolved GDI apps like Windows Media Player.
Photo Gallery appears to be a standard Win32 app according to its files, although it does make use of 'glass'. I certainly agree about Yahoo Messenger; quite why Microsoft didn't have a few impressive WPF apps lined up is strange, but haven't we seen the same pattern with .NET 2.0, at least in terms of mass-market apps?
I think Microsoft is positioning itself to provide the best development platforms for these future applications, and I also guess there must be just too much work invested in the older Win32 architectures and applications (think: Office). In terms of development tools, I don't think anyone could choose a better environment than VS, and Microsoft seems to be heading for the same with Expression Blend, the WPF development tool, which *is* written in .NET and uses WPF, and is really quite amazing as development tools go.
Having said that, it seems like a big oversight that no WPF apps shipped with Vista; they could have made a lot of the initial reviews much more favourable towards the Aero UI, especially when inevitably compared to Mac OS X's UI (to which, I think everyone will agree, WPF is technically superior).
Having said that, they may have a few ready and waiting for the January 30th launch.
DCMonkey wrote: The DWM makes window dragging look great, and Glass and Flip3D are neat looking, but I'm really disappointed with the quality of window redrawing while resizing a window, especially for windows with client-area glass like Windows Media Player. Resizing WMP on my system leaves behind an ugly black ghost of the glass area at the bottom of the window, trailing behind as it attempts to keep up with the redraw. Frankly it looks much better with the old non-composited desktop.
Is this going to get fixed anytime soon, or in the next version of Windows?
Completely agree. I was disappointed that window contents aren't double-buffered as they are in OS X to avoid any kind of flickering, but it's really surprising how bad it looks; WMP11 is the biggest offender.
I would hope that this is a driver issue, but MS hasn't exactly paid much attention to niggling details like this in the past.
Well, this isn't really the fault of the DWM in Vista. Admittedly, it's marginally better when running in classic mode (no black section, but still slow to resize), but if you resize WMP quickly on a fairly slow machine, it becomes apparent that the black section you're talking about is in fact the 'classic' black non-glass WMP controls, buttons and all, still being rendered in the background!
This is more a case of poor software development than anything else.
If you try resizing a proper WPF app, such as the New York Times Reader, you'll find it a much better experience. Remember that although you are running a composited desktop, inside each composited window it's still all GDI-based rendering unless the application is WPF.
Off the top of my head, I don't think Vista ships with any WPF applications. In fact, for that matter, I don't think it ships with any .NET 2.0 executables either.
I think we all often forget that what we currently know as 'Vista' isn't the full picture by any means. WPF is a key part of it, an amazingly powerful platform, and it's currently 99% unused, even in Vista itself. A few months down the line, we'll have DX10 graphics cards, designed with WPF in mind, accelerating even more of the WPF framework than DX9 does now. It's going to be big, and what we've seen so far is nothing.
An excellent video, really informative to watch, though it does leave some questions unanswered.
What about automatic jack sensing? I can just plug in a mic and it's detected as a mic, right? The same for my other speakers?
What about HD Audio? OK, so we get WaveRT DMA implementations, but what else? It would be nice to hear about some of the other HD Audio tech, like the jack sensing above.
How about DVD-Audio discs? How are those going to be handled? That's 192 kHz, right?
What about multi-channel setups?
And finally, hardware mixing acceleration?
I appreciate this article was really intended to give software developers an insight into the various structures of the new Windows audio platform, but perhaps a future article could cover some of the points above.
What you say is certainly correct, and I do agree with it, but it's not quite the point I wanted to make in my original message. You are right that even on single-core systems multithreaded apps will see a performance benefit from better handling of blocking and the like (and all the more on HT), but that doesn't really make much use of the power of dual cores. The same goes for applications which run the GUI in one thread and the processing in another; having owned an Athlon MP system, I can say the difference is very small. Very rarely does a multithreaded app utilise more than 50% of the dual processors' time (i.e. 100% of one CPU's timeframe), showing that little work is being spread onto the second processing unit (unless one runs a specifically SMP-aware app, of which there are frighteningly few).
For dual cores, any significant speed-up is not going to come from multithreaded IO queueing or a GUI-thread/worker-thread split, as this load level is already well handled in the context of a single CPU, so to speak. As you correctly say, in the server environment this is great: one can create many threads for client requests, and so on. But on the desktop, the only benefit is likely to come from multitasking in certain situations, for example compressing CD audio to WMA while playing a video. And I think that's how dual cores will be marketed. That's not a bad thing, but it's not going to give any single app a definite speed increase, which is what most home users are looking for.
Eventually, multi-core-aware applications will arrive that use both CPUs within one application context and aren't high-end video encoders or renderers, but until then most dual cores are likely to go rather underused.
This leads to my second argument: because even programming two threads is difficult enough, software will likely be written with only two cores in mind. When highly multi-core processors arrive, in 2010 or thereabouts, we're back to the same problem: on a four-core system, for example, only 50% of the CPU time would be used (two CPUs' worth), unless the developer specifically recodes the app to create four threads.
What is needed is a way to mark, in code, any task as parallelisable, without going down to the thread level in the program. The kernel should be able to create enough threads to occupy all the processors, based on some abstracted definition of the parallel task. That would mean that however many cores one's CPU has, an application with an explicitly parallelisable section automatically uses all of the available computing power on the host system. I don't know how it would be done in practice, but it's just an idea.
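Something along those lines can already be sketched in user space: in the snippet below the caller only marks the work as parallelisable by handing it to a helper, and the worker count is picked from the host's core count rather than hard-coded by the programmer. (Python is used purely for illustration, and `parallel_map` is an invented name, not a real API.)

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_map(func, items):
    """Apply func to every item, fanned out across as many workers as the
    host has cores -- the application never names a thread explicitly."""
    workers = os.cpu_count() or 1  # one worker per core, whatever the machine
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))  # results keep input order

def square(n):
    return n * n

print(parallel_map(square, range(8)))  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

On CPython a process pool would be needed to get true CPU parallelism for work like this, because of the GIL, but the shape of the idea is the same: the "how many threads" decision lives in the runtime, not in the application.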
This was an extremely interesting video; I'll be looking forward to the next ones.
Dual-core solutions are obviously the way the world is going at the moment, and I would be interested to know what kind of updates in the kernel space are planned for Longhorn.
If, in the future, we are heading towards massively parallel computing in the home, it's difficult to see how the pace of application development will keep up. With simple core-speed increases, one's application always runs faster. However, taking advantage of two cores needs some multithreading work, as is being done now in limited amounts; go beyond two, and the only things that benefit today are distributed rendering and science applications, hardly home-use stuff. If our desktop in 2010 has eight cores, for instance, one's dual-core (two-thread split) multithreading-aware game or application is unlikely to see a performance boost; one would need to code with an eight-way load split in mind. So each generation of application is going to become increasingly targeted at its generation of processors (two-way, four-way, eight-way, etc.), because coding for an architecture more parallel than the one you are using would be a waste of time for developers, who often only foresee their application being used within the time frame of the current generation of processors. I also think it would be very difficult to split anything other than a rendering or science app into something that would execute simultaneously on eight cores, especially if the onus has to fall on the developer.
To be truly future-aware, you would need a system that could take any task marked as parallelisable and split it into a number of threads matching the cores of the processors on that system. Otherwise, as I said above, you will end up with a lot of apps targeted at dual core, then quad core, etc., with no application able to take advantage of a processor with more cores than it was originally written for. The parallelisation needs to be abstracted away from programmer-written threads into some new structure which can be automatically split into the number of threads required to cover the cores on the system.
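As a rough sketch of what that might look like from the application's side (the helper names below are invented for illustration, not any real API): the program describes one divisible task, and the splitting into per-core chunks happens entirely at run time, so the same code saturates a 2-, 4-, or 8-core machine without being rewritten.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def split_chunks(n_items, n_parts):
    """Divide indices 0..n_items-1 into n_parts contiguous chunks."""
    base, extra = divmod(n_items, n_parts)
    chunks, start = [], 0
    for i in range(n_parts):
        size = base + (1 if i < extra else 0)  # spread the remainder evenly
        chunks.append(range(start, start + size))
        start += size
    return chunks

def parallel_sum(values):
    """One chunk per core: the degree of parallelism is decided here at
    run time, not baked into the application when it was written."""
    cores = os.cpu_count() or 1
    chunks = split_chunks(len(values), cores)
    def partial_sum(chunk):
        return sum(values[i] for i in chunk)
    with ThreadPoolExecutor(max_workers=cores) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(100))))  # -> 4950, on any number of cores
```

The result is the same whether the machine has two cores or sixteen; only the chunking changes, and the application never had to be recompiled for a wider split.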
So I ask: wouldn't that be a task for the kernel in the future?
I hope that made sense; I'll hit Post and hope someone understands it.