Suppose it is a naive implementation of the sequential access pattern. I'm even thinking: should I create an analogous sample for my library at http://plugins.codeplex.com? I already have a working sequential pattern (combined with the lazy evaluation pattern) in use in a logging mechanism, where a lot of logging operations, each of them not very time-consuming, can noticeably slow down overall application performance. See my SequentialWorkItem<T> implementation in the Plugins.Threading library.
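I don't have the SequentialWorkItem<T> source at hand, so here is only a minimal sketch of the idea in Java (the class and method names are mine, not the library's): cheap operations such as log writes are queued and executed strictly in order on one background thread, so callers never wait on I/O.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: serialize many cheap operations (e.g. log writes)
// onto a single background thread so the calling threads never block.
class SequentialQueue {
    private final BlockingQueue<Runnable> items = new LinkedBlockingQueue<>();
    private final Thread worker;

    SequentialQueue() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    items.take().run();   // items run strictly in enqueue order
                }
            } catch (InterruptedException e) {
                // on shutdown, drain whatever is still queued, then exit
                Runnable r;
                while ((r = items.poll()) != null) r.run();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Returns immediately; the item runs later on the worker thread.
    void enqueue(Runnable item) { items.add(item); }

    void shutdown() throws InterruptedException {
        worker.interrupt();
        worker.join();
    }
}
```

The point is that `enqueue` is nearly free for the caller, while ordering is still guaranteed because there is exactly one consumer.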
Actually, I can understand German. The author shows us how to use the new threading extensions, and the Result property in particular, in a synthetic sample. Are you really going to draw a graph of the sync relations between more than one kind of competing thread? Does it actually work for you? I mean, there are better ways to spend our time anyway; even if the folks at Microsoft forgot this field, it is trivial to implement manually. Anyway, the good news is that they didn't forget the great Result property.
I think I can help people with my private solution. Think of it: my framework uses only a few BCL classes, yet lets us deliver the full power of multi-core, multi-threaded apps.
I used only these BCL classes: AutoResetEvent, Monitor,
ThreadPool, and WaitHandle. The framework is designed for both kinds of parallelism: vertical parallelism (the number of concurrent items per processor) and horizontal parallelism (the number of processors consumed).
It is so simple that the programmer only needs to care about how to express the given algorithm as parallel work items: implement a single work item for the algorithm itself, plus the generic data type used by that work item.
By default, the queue engine tries to scale the algorithm horizontally, distributing work items to match the number of virtual cores (Environment.ProcessorCount), one per thread. If that succeeds (i.e., the estimated number of parallel work-item tasks is greater than or equal to the required number of processors), it scales vertically: the core with the fewest pending tasks gets the highest priority when the next work item is allocated, so all cores are used roughly equally, depending on the algorithm. But
nothing stops you from customizing the algorithm to run 85% of the ("cheap") parallel work items on a single core, while the other 15% (the "hard" work items) run on all the remaining cores (for example, on an 8-core i7), and, believe me, it is very easy.
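The actual framework is built on the .NET primitives listed above, so I can only illustrate the allocation rule itself here: a sketch in Java (all names are mine, hypothetical, not the library's API) of "the worker with the fewest pending tasks gets the next work item", which is what keeps the cores evenly loaded.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the vertical-scaling rule described above:
// each incoming work item goes to the worker with the fewest pending tasks.
class LeastLoadedDispatcher {
    private final List<AtomicInteger> load = new ArrayList<>();

    LeastLoadedDispatcher(int workers) {
        for (int i = 0; i < workers; i++) load.add(new AtomicInteger());
    }

    // Pick the worker index with the least pending tasks and charge it one task.
    int dispatch() {
        int best = 0;
        for (int i = 1; i < load.size(); i++) {
            if (load.get(i).get() < load.get(best).get()) best = i;
        }
        load.get(best).incrementAndGet();
        return best;
    }

    // Called when a work item finishes, freeing capacity on that worker.
    void complete(int worker) { load.get(worker).decrementAndGet(); }
}
```

With tasks of uniform cost this degenerates to round-robin; the rule only starts to matter when some work items ("hard" ones) run much longer than others, which is exactly the 85%/15% split mentioned above.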
It is about a couple of kilobytes and a few hundred lines of code, extremely easy to read and understand. I used
Pex and Code Contracts. So it really makes me happy, and it just works!