CCR at MySpace


Description

MySpace has done some pretty amazing things with the Robotics Developer Studio.  When they found out it contained a very powerful component, the Concurrency and Coordination Runtime (CCR), their architects built it into the architecture of MySpace, the largest .NET site in the world.  At MySpace, I met Principal Architect Erik Nelson and Senior Architect Akash Patel, who walked me through how they are using the CCR.
If you have an MSDN subscription, you are a student, or you are at a qualifying startup, you can now download and use the Robotics Developer Studio toolkit as part of the program.


The Discussion

  • ScreenPush

    What is the average throughput for Myspace? How much has CCR improved performance?

  • ErikNelson

    Those are both pretty broad questions. I can't actually give an answer to our total throughput, but that's a combination of many technologies, so it's difficult to say how much CCR affects it directly.

     

    As to how it improved performance, it was used as part of a re-architecture that happened concurrently with the site growing in size many times over. Since our load increased dramatically while we were implementing it, and its use came along with other changes in our middle tier, there really isn't an apples-to-apples comparison that can be made.

     

    I apologize for the non-answers here, but if you have more specific questions, I can try to address them!

  • Aayush Puri

    Hey Erik - In the presentation you mentioned that even the search team at MySpace is using CCR. Could you elaborate a bit on that?

    As you mentioned, it made sense to use CCR to improve throughput in the communication layer when you have a lot of messages and data to transfer between many components.

    So how does the search team benefit from the high concurrency that CCR enables? Is some kind of map/reduce paradigm being used to query many nodes/documents in parallel?

  • mgarski

    Within our search infrastructure we use the CCR to manage concurrency in our processing pipeline, assigning indexing tasks to a pool of workers.  The benefits we've received from using the CCR are that it simplifies concurrency management and provides a very high level of throughput.  We are not currently using the CCR during search execution, but we are examining ways in which we can.
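The pattern mgarski describes, a pipeline dispatching indexing tasks to a pool of workers, can be sketched in outline. The CCR expresses it with messages posted to Ports and consumed by Arbiters running on a Dispatcher thread pool; since no MySpace code is shown here, the sketch below uses Python's standard library as an analogy, and names like `index_document` are illustrative assumptions, not their actual API.

```python
# Analogous sketch of a worker-pool pipeline: tasks posted to a shared
# queue (like a CCR Port) are pulled by a fixed pool of worker threads
# (like handlers activated on a CCR Dispatcher).
import queue
import threading

def index_document(doc: str) -> str:
    # Placeholder for real indexing work.
    return doc.upper()

def run_pipeline(docs, num_workers=4):
    tasks = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            doc = tasks.get()
            if doc is None:          # sentinel: shut this worker down
                tasks.task_done()
                return
            indexed = index_document(doc)
            with lock:
                results.append(indexed)
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for doc in docs:                 # post work, like posting to a Port
        tasks.put(doc)
    for _ in threads:                # one sentinel per worker
        tasks.put(None)
    tasks.join()
    for t in threads:
        t.join()
    return results

print(sorted(run_pipeline(["alpha", "beta", "gamma"])))
```

The CCR version of this avoids the explicit threads and lock entirely: handlers are only scheduled when a message arrives on a port, which is what simplifies the concurrency management mgarski mentions.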
