I'd second staceyw's
comments and add a few of my own.
Once I understood the model, I did stop thinking about concurrency per se, in the sense that I stopped worrying about threads and locks. I still have to worry about state, because it's my responsibility *not* to schedule simultaneous writes to the same memory,
but this scheduling is *much* easier to reason about than before. I suppose in this sense, I am thinking about coordination and not concurrency. Although I get rather nice core utilisation for 'free'.
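To make that concrete: the scheduling I'm describing is exactly what the CCR's Interleave arbiter expresses — you declare which handlers must run exclusively (writers) and which may overlap (readers), and the arbiter does the 'locking' for you. A rough sketch from memory, using the `Microsoft.Ccr.Core` types (treat the exact signatures as approximate):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Ccr.Core;

class InterleaveSketch
{
    // State owned by the interleave below; no lock is ever taken on it.
    static readonly Dictionary<string, int> state = new Dictionary<string, int>();

    static void Main()
    {
        var readPort = new Port<string>();
        var writePort = new Port<KeyValuePair<string, int>>();

        using (var dispatcher = new Dispatcher())
        {
            var queue = new DispatcherQueue("example", dispatcher);

            // Writers are scheduled exclusively; readers may overlap each
            // other, but never a writer -- so 'state' needs no lock.
            Arbiter.Activate(queue,
                Arbiter.Interleave(
                    new TeardownReceiverGroup(),
                    new ExclusiveReceiverGroup(
                        Arbiter.Receive(true, writePort,
                            delegate(KeyValuePair<string, int> kv) { state[kv.Key] = kv.Value; })),
                    new ConcurrentReceiverGroup(
                        Arbiter.Receive(true, readPort,
                            delegate(string key) { Console.WriteLine(state[key]); }))));

            writePort.Post(new KeyValuePair<string, int>("answer", 42));
            readPort.Post("answer");
            Console.ReadLine(); // give the dispatcher time to run the handlers
        }
    }
}
```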
Programmers unfamiliar with the CCR are often scared off by the perceived implications of a message-passing model. But the CCR's lightweight model is still very efficient, and whilst the straight-line traditional model might be a few percent quicker, I'll
take the CCR most times because in the long run I personally get a better overall result, not just in terms of performance, but also in failure handling, robustness, scalability and clarity. And even if you just used it to introduce some asynchronous I/O into
your app, you'd be amazed at the difference that alone can make.
Interestingly, where you do still have to think about threads, locking, blocking etc. is around the boundaries where you are communicating with some incompatible API/threading model, say COM, or WinForms/WPF. I don't know when (if ever) the user interface
will move away from a model of thread affinity, but on the client side I think there's always a bit of a jarring switch between CCR and UI. It's doable but not entirely satisfying.
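By 'doable' I mean the usual dance: the CCR handler runs on a dispatcher thread, and the control has thread affinity, so you hop back explicitly. A hypothetical WinForms fragment (here `queue` is an active `DispatcherQueue` and `statusLabel` a `Label` created on the UI thread — both assumed):

```csharp
using Microsoft.Ccr.Core;
using System.Windows.Forms;

var updates = new Port<string>();

Arbiter.Activate(queue,
    Arbiter.Receive(true, updates, text =>
        // This handler runs on a CCR dispatcher thread, so marshal the
        // actual control update back onto the UI thread.
        statusLabel.BeginInvoke((MethodInvoker)(() => statusLabel.Text = text))));

updates.Post("ready"); // safe to post from any thread
```

It works, but you can feel the two threading models grinding against each other, which is the jarring switch I mean.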
On the point of failure: as soon as you move to an asynchronous message-passing model, you pretty much can't assume that (a) your message will even be delivered, (b) it will be delivered correctly or within time constraints, (c) the object will handle it correctly,
(d) any response gets delivered, or (e) any response gets delivered within time constraints. You can ignore any one or more of these conditions, but your system will live-lock pretty quickly if you're only ever waiting indefinitely for the successful response.
The CCR model makes you consider these possible failures, but through its arbiters and causalities gives you the mechanisms to deal with them. And this leaves your code in much better shape when you really do physically distribute, because you're prepared for
the much more likely scenarios of timeout and network failure.
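The timeout case in particular becomes explicit rather than implicit: instead of blocking on a reply, you arbitrate between a response port and a timer port, and whichever fires first wins. Again a sketch from memory, so take the exact signatures with a pinch of salt:

```csharp
using System;
using Microsoft.Ccr.Core;

var responsePort = new Port<string>();
var timeoutPort = new Port<DateTime>();

using (var dispatcher = new Dispatcher())
{
    var queue = new DispatcherQueue("timeouts", dispatcher);

    // Choice activates both branches; the first message to arrive
    // runs its handler and the losing branch is torn down.
    Arbiter.Activate(queue,
        Arbiter.Choice(
            Arbiter.Receive(false, responsePort,
                reply => Console.WriteLine("reply: " + reply)),
            Arbiter.Receive(false, timeoutPort,
                _ => Console.WriteLine("timed out - retry or fail over"))));

    // Post a DateTime to timeoutPort if nothing arrives within 2 seconds.
    queue.EnqueueTimer(TimeSpan.FromSeconds(2), timeoutPort);

    // In the happy path some other handler would eventually do:
    // responsePort.Post("pong");
    Console.ReadLine();
}
```

The point is that the failure branch is written down next to the success branch, which is precisely what leaves you prepared for real distribution.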
The final point I'd make is kind of related to mash-ups. While the asynchronous message-passing actor/agent model is a good approach for composing distributed systems from a 'temporal' perspective, from a functional perspective mash-ups surely work because
the operations you can perform against the various data sources (basically GET and POST) are relatively simple, uniform in their behaviour and well understood. Some are even actually RESTful. The DSS model, which sits above the CCR, takes a similar view of distributed
state manipulation. It would be good to get a Going Deep on the DSS to the same extent that we've seen on the CCR itself, because just using the CCR to make your SOAP/WSDL clients/servers more efficient isn't going to make them more composable.
Thanks Charles. Great watch. And you were as
good as your word about asking George about CCR uptake. My interpretation was that, being outside the Developer Division, they haven't necessarily had access to the marketing muscle available within it. Nevertheless, kudos to you and Channel 9 for following
the CCR over the past 5 years - to paraphrase Erik, there are popular things and there are important things...
I don't know how wide the adoption has been of course. I did say that it *seemed* not to have been wide and of course I could be wrong - wouldn't be the first time - but using the only benchmarks I have, namely (1) the number of articles in the blogosphere
on CCR and (2) the amount of activity on the CCR/DSS forums, it's not an entirely invalid assumption.
Of course it could be that not many think the CCR is worth blogging about, or that it is such an obviously intuitive model that not many need the forums, but I think neither of these is true and your own comments in the video suggest that some developers may
consider the programming model something of a mental leap from where they currently are.
I personally think the CCR is a really powerful model and *well worth* the investment. We have commercial licences for it at my place of work and maybe one day I'll be able to talk about that.
On the other hand, it could be that by helping solve issues both of concurrent systems design and of scalability, everyone is using it, but keeping quiet, hoping that no-one else will. But I doubt it...
It's interesting that the
CCR is the machinery behind all this. It's an excellent piece of work and a shame (to my mind) that its adoption does not seem to have been wider. I think the actor model is certainly one of the better models for taming concurrency, and the CCR provides
a strong foundation for it, amongst other things.
I think anything that Microsoft do to promote this model is going to be a good thing in the long run, and I look forward to a possible future release of Maestro in some form.
The deep isolation present in
Erlang, which supports the actor model as a first-class concept, is also to be found in
Decentralized Software Services, which builds atop the CCR and is available to C# developers now.
For those interested in the functional perspective, there is a non-blocking asynchronous message passing implementation available within the F# CTP and I've been
toying with using the CCR from within F# computation expressions to simplify the syntax somewhat.