Lazy<T,TMetadata> actually extends Lazy<T>, and no, it won't end up in the BCL -- for Beta 2, it lives in System.ComponentModel.Composition. We did consider an implicit cast operator, but because accessing the value executes arbitrary code, we decided that was a bad idea. It's only complicated further by the fact that the valueFactory delegate may throw an exception.
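To make the shape concrete, here's a minimal sketch of importing a part with metadata via Lazy<T, TMetadata>. The ILogger and ILoggerMetadata contracts here are hypothetical, invented purely for illustration:

```csharp
using System;
using System.ComponentModel.Composition; // home of Lazy<T, TMetadata> in Beta 2

// Hypothetical contract and metadata view, for illustration only.
public interface ILogger { void Write(string message); }
public interface ILoggerMetadata { string Name { get; } }

public class Consumer
{
    // Metadata is available immediately; the part itself isn't created
    // until .Value is accessed (which may throw if the underlying
    // valueFactory throws).
    [Import]
    public Lazy<ILogger, ILoggerMetadata> Logger { get; set; }

    public void Log(string message)
    {
        Console.WriteLine("Using logger: " + Logger.Metadata.Name);
        Logger.Value.Write(message); // the expensive creation happens here
    }
}
```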
Lazy is a pretty well-known CS concept and "Cached" doesn't seem to capture the most important function of the type -- you can have something that's eagerly evaluated and cached as well.
There is no way to invalidate a specific Lazy<T> instance. We wanted Lazy<T> to be as observationally pure as possible and behave just as if it were any other variable. For example, in C#, you can't create an int of 4 and then invalidate it -- you simply reassign it. The same goes for a Lazy<int>.
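In other words, "invalidation" is just reassignment. A quick sketch (ExpensiveComputation is a hypothetical placeholder):

```csharp
var cached = new Lazy<int>(() => ExpensiveComputation());
int first = cached.Value;   // valueFactory runs once; the result is cached

// There is no Invalidate() -- to "refresh", reassign the variable,
// just as you would with a plain int:
cached = new Lazy<int>(() => ExpensiveComputation());
int second = cached.Value;  // valueFactory runs again on the new instance
```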
As for the LINQ query, I'm not sure I understand the question. When you use a Lazy<T> in a LINQ or PLINQ query, it will be evaluated whenever execution of the query reaches the point where Value is called.
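For instance (Compute is a hypothetical placeholder), nothing runs until the query is enumerated and Value is touched:

```csharp
var lazies = new[]
{
    new Lazy<int>(() => Compute(1)),
    new Lazy<int>(() => Compute(2)),
};

// Nothing has been computed yet -- the LINQ query is deferred too.
var query = from l in lazies select l.Value * 10;

// Each valueFactory runs here, as enumeration touches .Value.
foreach (int result in query)
    Console.WriteLine(result);
```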
You're right on the money -- lazy language integration would be sweet; so sweet, in fact, that F# has it. As for C#, it's certainly something the C# team has considered and would consider again, but it takes a lot of effort and a lot of vetting to get a new feature into a language. The type hasn't even been RTMed yet! That said, the C# team always loves feedback and feature requests:
Great question and something we actually considered. As you've pointed out, there is more than one way to pass data from a producer to a consumer. If we think individually about the communication on either side of the data structure (P for production, C for consumption) as being synchronous or asynchronous, we come up with four different styles: P(sync)/C(sync), P(async)/C(sync), P(sync)/C(async), and P(async)/C(async). I believe you're referring to the last one.
P(sync)/C(sync) is essentially a single-item data structure that synchronizes the producer and consumer in lock step with each other. You can think of this as two people in the middle of a bucket line trying to put out a fire. The person closer to the beginning of the line will not grab another bucket (more work) until the person next to them takes their bucket, and the person closer to the end of the line will wait and not do any work until a bucket is available. This can be achieved in .NET 4 by creating a BlockingCollection<T> and bounding it with a capacity of 1.
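A minimal sketch of that lock-step arrangement -- with a bounded capacity of 1, Add blocks until the consumer has taken the previous item:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

var bucket = new BlockingCollection<int>(boundedCapacity: 1);

var producer = Task.Factory.StartNew(() =>
{
    for (int i = 0; i < 5; i++)
        bucket.Add(i);        // blocks until the previous item is taken
    bucket.CompleteAdding();  // signals "no more buckets"
});

var consumer = Task.Factory.StartNew(() =>
{
    // Blocks waiting for each item; exits when adding is complete.
    foreach (int item in bucket.GetConsumingEnumerable())
        Console.WriteLine(item);
});

Task.WaitAll(producer, consumer);
```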
P(async)/C(sync) allows the producer to drop off items as fast as possible, asynchronously, but the consumer will block until data is available. In our fire-brigade example, this would be analogous to a single person throwing water on the fire while everyone else grabs buckets of water, drops them off near the thrower, and returns to get more. This is the default mode supported by BlockingCollection<T>.
P(sync)/C(async) is all about throttling. Producers can only drop off as many items as there are available slots before being blocked, but as soon as work is available, a consumer is created and given the data. Think of this as the previous example except that there are a limited number of buckets, so the people fetching the water must wait until a bucket is emptied before filling it. Instead of a single person throwing water, there is a pool of people sitting around, and whenever a full bucket appears, one of them runs up, throws the water, and returns to the pool. There isn't any built-in functionality to support this in .NET 4. With your idea of an event, this could be supported on BlockingCollection<T>.
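Since there's no built-in support, here's one way you might approximate the throttled P(sync)/C(async) style in .NET 4: a bounded BlockingCollection<T> throttles the producers, and a small pre-created pool of tasks stands in for the "people waiting to throw water" (a true implementation would spin up a consumer per item instead). The capacities and counts below are arbitrary:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Bounded capacity throttles producers: Add blocks once 4 buckets
// are outstanding.
var buckets = new BlockingCollection<int>(boundedCapacity: 4);

// Pool of consumers: each grabs a full bucket as soon as one appears.
var consumers = new Task[2];
for (int i = 0; i < consumers.Length; i++)
{
    consumers[i] = Task.Factory.StartNew(() =>
    {
        foreach (int bucket in buckets.GetConsumingEnumerable())
            Console.WriteLine("Throwing bucket " + bucket);
    });
}

// Producer blocks whenever all 4 slots are full.
for (int b = 0; b < 10; b++)
    buckets.Add(b);
buckets.CompleteAdding();

Task.WaitAll(consumers);
```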
P(async)/C(async) is the last model and the one I think you're most interested in. Producers can drop off as many items as they'd like, and as soon as work is available, a consumer is created and given the data. This is the combination of an arbitrary number of workers grabbing buckets and dropping them off and an arbitrary number of workers being handed buckets as soon as buckets are available. This is the most scalable model, and it is also the model supported by the CCR's Port types.
That said, there are trade-offs to all of these models. Speaking strictly from the consumer side, ordering becomes a big issue. Blocking a thread allows you to easily maintain some state on a given thread and guarantee that you'll process messages in the
order that they arrive (provided that the producer keeps the messages in order). This is absolutely essential for some streaming scenarios (like encryption) and simply cannot be maintained by completely asynchronous communication.
And so, after going off on a huge (but hopefully informative) tangent, in short, we realize there's a gap in functionality but didn't want to cram every scenario into a single, bloated type. We're hoping to fill some of these gaps in the next version of
.NET but, until then, the CCR may be of benefit to you.
Thanks for the feedback. This example was meant to be a five-minutes-or-less taste of what Axum looks like. It's pretty difficult to show any complex concurrency in five minutes. Keep your eyes peeled, though -- there'll be much more coming in the complexity department.
Networks in Axum will rock your world.