The C++ Concurrency Runtime - Parallel Patterns Library



The C++ Concurrency Runtime is new in Visual Studio 2010 and currently in beta. The runtime encapsulates and extends many new operating system features, including NUMA resource locality and User-Mode Scheduling.

The Parallel Patterns Library (PPL) provides an imperative programming model that promotes scalability and ease-of-use for developing concurrent applications.  The PPL raises the level of abstraction between your application code and the underlying thread/task scheduling mechanisms by providing generic, type-safe algorithms and containers that act on data in parallel.  The PPL also enables you to develop applications that scale by providing alternatives to shared state.

The PPL provides the following features:

  • Task Parallelism: a mechanism to execute several work items (tasks) in parallel.

  • Parallel algorithms: generic algorithms that act on collections of data in parallel.

  • Parallel containers and objects: generic container types that provide safe concurrent access to their elements.

By using the PPL, you can introduce fine-grained parallelism without having to manage a scheduler. To express coarse-grained parallelism, use the Asynchronous Agents Library instead.

You'll want to subscribe to the Native Concurrency blog, find more resources, and download example code from the Code Gallery.




The Discussion

What this example showed is that those same "subgroups of senior specialists" will be responsible for writing the bodies of the parallel for statements, to make sure that no race conditions sneak in. So, in the end, does this really solve the problem, or does it just make people who don't know how to write concurrent code think that it is now easy to do so (with parallel_for)?


Also, how will the PPL and all of the mumbo-jumbo (i.e., the Concurrency Runtime) it sits on handle parallelizing code in multiple concurrent processes? Or are we supposed to, from now on, run only a single "parallelized" process on a multi-core host?


    Hi Corrector,


    Thanks for the comment!   I apologize that I just noticed this on the blog.  


I suppose that's part of what the industry, not just Microsoft, intends to solve: making the expression of parallelism easier and safer for every developer.


    Of course, this demo is just introductory and not intended to address solution domain complexities in-depth.


parallel_for is simply a mechanism that lets the programmer express concurrency more easily. It frees them from writing and maintaining the plumbing code that so often accompanies concurrent code. So yes, it does lower the barrier to writing parallel code. We see this as a good thing.


Does that mean that some programmers will write parallel code who should not be writing it? Perhaps, but that is not the fault of the runtime system. It is a skill-development problem, and it falls to development teams to solve, for example through code inspection and design guidelines. Tools can also help enforce best practices (e.g., see the new VS2010 parallel performance profiler).


    Our other option is simply to stop developing new features in existing applications or ask the users to put up with slower applications.


The question of multi-process concurrency is interesting, and there are tools and techniques that can be used to accomplish this (e.g., Windows HPC Server and the MSMPI SDK). We recommend starting with domain decomposition and implementing execution-partitioning techniques that map onto the scope of the parallel computing problem. As you know, some computations simply can't be solved without multi-process or multi-computer parallel processing.


    However, the C++ Concurrency Runtime or .NET Parallel Extensions may still be significant components of even a distributed-process solution. These technologies are specifically designed to express concurrency at the application scope, with an implementation that is highly optimized for resource management, shared state, and thread-level scalability. Even if multi-process scheduling and synchronization designs were added, the existing mechanisms would still apply and be necessary.


    - phil


    Good :)
