If I look at this video I can't help thinking that there is nothing new here, and I am quite surprised to see that Microsoft is so late to this. Yes, Apple (now I am sure many Windows fanboys will call me a Mac troll, but anyway!!!!) has been working on data parallelism for many years now. Apple has been shipping APIs for SIMD programming for years that provide data parallelism for image processing, scientific applications, signal processing, math computing, etc. This API is the Accelerate framework, and it does all the work for the developer. No need to worry about which architecture your program will run on (PowerPC or Intel): the API does the optimisation for you, the vectorizing for you, and the architecture-dependent tuning for you. No need to worry about data alignment, vector instructions, etc. It provides the whole abstraction, and that is certainly why SIMD computing is far more widespread on the Mac than on Windows. On the PC you could use Intel's vectorizing tools, but those are expensive and the level of abstraction is still not as high as a developer would like.

Now, talking about GPU processing, I cannot see anything impressive in this video. Apple (yes, Apple again, sorry!!) is already shipping TODAY (not a research project) an object-oriented API for high-end applications and data-parallel computing: CoreImage and CoreVideo do just that. They give developers an abstraction model for GPU programming. CoreImage uses OpenGL and the OpenGL Shading Language and runs on programmable GPUs, but developers do not need to know how the GPU or OpenGL works: CoreImage and CoreVideo provide all the abstraction through an object-oriented programming model built with Cocoa. You don't need to know graphics programming or computer-graphics mathematics either; CoreImage/CoreVideo abstract all of that away.

Moreover, CoreImage/CoreVideo optimize on the fly for a given application, depending on the architecture the program runs on. They scale performance to the resources you have: they target the GPU if the hardware allows it, otherwise they fall back to AltiVec (SIMD computing) on the G4/G5 or to SSE on Intel, and they will also take advantage of multi-processor or multi-core machines when they can. CoreImage/CoreVideo also provide a set of built-in Image Units that perform common graphical effects (blur, distortion, morphology, you name it), all running on the GPU. They use a non-destructive pipeline and 32-bit floating-point numbers, and the architecture is completely modular: any developer can build their own Image Unit. Anyone can download a sample application named "FunHouse" from the Apple developer tools that performs REAL TIME image processing on the GPU; much more impressive than their demo, I would say. More importantly, high-end applications like Motion and Final Cut Pro 5's Dynamic RT technology leverage CoreImage and CoreVideo, so you get real-time graphics and video processing!!!

So I don't really think that what is shown in this video is new or a breakthrough (sorry!!!!), particularly when it is still a research project while CoreImage and CoreVideo already do even more and have been available for more than a year now. I would really advise people interested in Accelerator to have a look at CoreImage and CoreVideo too; they will find state-of-the-art GPU-based data processing and data-parallelism technology.
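To make the Accelerate point concrete, here is a minimal sketch of what that abstraction looks like from the caller's side. It is written in modern Swift purely for readability (the framework's public API is plain C), and the input arrays are a made-up toy example; vDSP_vadd itself is a real Accelerate routine.

```swift
import Accelerate

// Element-wise vector addition through vDSP.
// Accelerate picks the best SIMD path for the host CPU (AltiVec, SSE, ...);
// the caller never deals with alignment, intrinsics, or the target architecture.
let a: [Float] = [1, 2, 3, 4]
let b: [Float] = [10, 20, 30, 40]
var c = [Float](repeating: 0, count: a.count)

vDSP_vadd(a, 1, b, 1, &c, 1, vDSP_Length(a.count))
print(c)  // [11.0, 22.0, 33.0, 44.0]
```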
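And on the CoreImage side, a minimal sketch of applying one of the built-in Image Units (a Gaussian blur). The file path and radius are invented for illustration; CIContext, CIFilter and the "CIGaussianBlur" filter name are the real API.

```swift
import Foundation
import CoreImage

// Apply a built-in Image Unit (Gaussian blur) with CoreImage.
// The filter chain is just a recipe until it is rendered; where the hardware
// allows it, rendering runs on the GPU, with no OpenGL or shader code in sight.
let context = CIContext()   // GPU-backed rendering context when available
guard let input = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/photo.jpg")) else {
    fatalError("could not load the sample image")
}

let blur = CIFilter(name: "CIGaussianBlur")!
blur.setValue(input, forKey: kCIInputImageKey)
blur.setValue(8.0, forKey: kCIInputRadiusKey)   // blur radius in pixels

let output = blur.outputImage!                                    // still lazy, nothing rendered yet
let rendered = context.createCGImage(output, from: input.extent)  // the GPU work happens here
```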
It's not the future, it's now.... One last point: there is something in the video that I don't agree with. One of the guys said that scientific computing could be done on GPUs. I don't really think so, or at least it depends on your needs. I am a geophysicist, specialising in fluid modelling and continuum mechanics. In most (if not all) scientific modelling work, double-precision math is required to achieve acceptable precision in the results. The problem is that GPUs do not provide double-precision floating-point support in their execution units. They provide only (so far!!) single-precision math, because that is enough for 3D modelling and games. In other words, the vector units in the GPU (yes, GPUs use a SIMD model for their execution units, which is why they can achieve such a high degree of parallelism in data processing) only support single-precision floating-point numbers. That is not enough for most scientific applications today. There is a lot of research out there on how to use GPUs for non-graphical calculations over large data sets, but so far nothing really usable for scientific computing.

Apple had a similar problem with AltiVec, because it does not support double-precision floating-point vectors, which prevented the G4 from offering vector computing for double-precision numbers. Some of the Accelerate APIs can do double-precision operations on AltiVec, but only for a few specific operations such as the double-precision Fourier transform. GPUs therefore have the same problem: they do not scale well for double-precision floating-point computing, which limits their use in scientific computing.

On the other hand, this does not mean that interesting work cannot be done with GPUs outside of the graphics world. There are proposals for taking advantage of the GPU's power to encode or decode MP3 files, MPEG-4 files, etc. Some ATI cards already do H.264 decoding in hardware, but we could imagine using the GPU to encode H.264 as well. Another application is of course animation, which requires a lot of data-parallel computing, and GPUs can help a lot there. Leopard's Core Animation is a good example of what can be done.
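To illustrate the precision argument with a deliberately artificial example: the sketch below just repeats one addition ten million times, which is roughly what time-stepping a numerical model does, and shows how much faster the rounding error accumulates in 32-bit floats than in 64-bit doubles.

```swift
// Repeatedly adding a value that is not exactly representable in binary.
// In 32-bit Float the rounding error grows quickly once the running sum
// gets large; the same loop in 64-bit Double stays far more accurate.
let n = 10_000_000

var singleSum: Float = 0
for _ in 0..<n { singleSum += 0.1 }

var doubleSum: Double = 0
for _ in 0..<n { doubleSum += 0.1 }

print(singleSum)   // visibly far from the exact answer of 1,000,000
print(doubleSum)   // off by only a tiny fraction
```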
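And since the double-precision Fourier transform in Accelerate came up, here is a small sketch of that code path using the vDSP FFT routines (the trailing "D" marks the double-precision variants). The 8-point impulse input is only a toy example.

```swift
import Accelerate

let log2n: vDSP_Length = 3        // 2^3 = 8-point transform
let n = 1 << Int(log2n)

var real = [Double](repeating: 0, count: n)
var imag = [Double](repeating: 0, count: n)
real[1] = 1.0                     // a unit impulse as a toy input signal

guard let setup = vDSP_create_fftsetupD(log2n, FFTRadix(kFFTRadix2)) else {
    fatalError("could not create the FFT setup")
}

real.withUnsafeMutableBufferPointer { realPtr in
    imag.withUnsafeMutableBufferPointer { imagPtr in
        var split = DSPDoubleSplitComplex(realp: realPtr.baseAddress!,
                                          imagp: imagPtr.baseAddress!)
        // In-place forward complex FFT, computed entirely in 64-bit precision.
        vDSP_fft_zipD(setup, &split, 1, log2n, FFTDirection(kFFTDirection_Forward))
    }
}
vDSP_destroy_fftsetupD(setup)

print(real)
print(imag)
```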