It seems that an advanced enough run-time can and should use both, based on the accumulated "knowledge" (stats) about the workloads being executed.
The expectation that something can be strictly evaluated is false in an absolute sense, because each and every CPU instruction and/or memory read/write may fail because of faulty hardware. Yet it can be statistically true. If the hardware is somehow known to be 99.something% reliable, such an assumption can be made safely (in a statistical sense); otherwise nothing can ever be computed or done.
(I believe that proponents of strict evaluation are stuck because they base their reasoning on incorrect assumptions without explicitly stating what those assumptions are, which is a known issue that plagued physics for centuries, and most likely still does.)
The same must apply to algorithms as well. If an algorithm is known to be predictable on a given workload (either statistically or by divine intervention of the mister human), it's OK to evaluate it strictly. If there is no prior knowledge, lazy evaluation is the way to go; and please gather execution stats upon exit so they can be reused in future evaluations/executions. And if it does not exit in the requested amount of time, abandon (preferably kill first) the execution and blacklist it (until the end of time or the next divine intervention).
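For what it's worth, here is a minimal F# sketch of that scheme. Everything in it is hypothetical: the names, the timeout policy, and the in-memory stats table (a real run-time would persist the stats across runs):

```fsharp
open System
open System.Collections.Concurrent
open System.Threading.Tasks

// What the run-time "knows" about a workload after running it.
type Verdict =
    | Completed of TimeSpan   // finished within budget; remember the cost
    | TimedOut                // blacklisted until the next divine intervention

// Accumulated stats, keyed by some (hypothetical) workload identity.
let history = ConcurrentDictionary<string, Verdict>()

let run (key: string) (budget: TimeSpan) (work: unit -> 'a) : 'a option =
    match history.TryGetValue key with
    | true, TimedOut -> None                       // known runaway: refuse to run
    | _ ->
        let sw = Diagnostics.Stopwatch.StartNew()
        let task = Task.Run(fun () -> work ())
        if task.Wait budget then
            history.[key] <- Completed sw.Elapsed  // gather stats upon exit
            Some task.Result
        else
            history.[key] <- TimedOut              // abandon and blacklist
            None                                   // (a running Task cannot really be killed)
```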
From 10,000 feet it looks like a nice logical schema with a feedback loop, which is statistically a necessity for each and every successful ecosystem (observe nature).
Actually, it's quite obvious. C#, Java, C++, and C are sugar-coated assembler. Reasoning about assembler, even sugar-coated, is a lost cause. Making those languages into something that can be reasoned about at compile time, and especially at run-time, would be practically impossible because of the long, hairy legacy those languages carry around.
In order to run a program on parallel hardware, the run-time would have to reason about side effects to come up with some strategy for partitioning the computational graph into workloads that have minimal interactions with each other.
If many-core processors end up having cores of different capabilities (which seems to be the case), run-time reasoning and JIT will be a necessity.
It seems like none of the existing imperative languages will survive the transition to the parallel era. Of course, run-times will still be written in something that is sugar-coated assembly, yet for general-purpose programming completely new languages will be required. Declarative and richly typed, presumably.
Also, to the point of run-time reasoning and code generation: to provide fault tolerance, the computational graph might need to be re-evaluated if a computation node returns an exceptional value or goes into a non-terminating state. In theory that would allow automatic remediation of run-away queries in databases and handling of non-responding services in the cloud (as well as of mutating hardware: failed or hot-plugged general- and special-purpose CPUs, failed or hot-plugged memory, and so on).
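As a sketch of just the re-evaluation part (the bounded retry policy is invented here; non-termination would be caught with a timeout like the one sketched earlier):

```fsharp
// Hypothetical bounded re-evaluation of a failed computation node.
let rec reEvaluate (attemptsLeft: int) (node: unit -> 'a) : Result<'a, exn> =
    try
        Ok (node ())
    with ex ->
        if attemptsLeft > 1 then
            reEvaluate (attemptsLeft - 1) node  // e.g. retry on another core/replica
        else
            Error ex                            // give up, surface the exceptional value
```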
It will probably take another 10 to 20 years to get it right, but it looks like that's where things are going.
Luke, for the sake of correctness: the Debug.WriteLine call was added to the MulTask() method, which was executed after the sequential and PFor multiplications. So the PFor loop ran without any blocking on screen output, and it was still slower than sequential, presumably because of all the extra work associated with priming the parallel execution environment.
If 90% of the input for my app on any given day happens to be small, it's better to process those 90% sequentially and use parallel execution only when appropriate; but for that it would be nice to know where (approximately) that cutoff point is. I can run tests and collect some stats on the overhead of firing up parallel execution, but assuming this work may already have been done while developing the parallel framework, it would be preferable for me to look at the stats collected by the PF development team rather than spend the time and effort myself.
One thing, though: when the matrix multiplication was executed on a small data set, sequential was actually faster than ParallelFor (@20:48). Are there any insights on estimating the overhead of setting up the parallel execution machinery, so an application could attempt to guess whether to process a data set sequentially or in parallel (assuming the application knows the size of the data set)?
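To illustrate what I mean, here's a guess at what such a dispatch could look like in F# (the threshold value is a made-up placeholder that would have to come from measurement on the target machine):

```fsharp
open System.Threading.Tasks

// Placeholder cutoff: the real value has to come from measurement.
let parallelThreshold = 64

// Multiply a*b, going parallel only when the input looks big enough
// to amortize the cost of spinning up the parallel machinery.
let multiply (a: float[,]) (b: float[,]) : float[,] =
    let n = Array2D.length1 a
    let k = Array2D.length2 a
    let m = Array2D.length2 b
    let result = Array2D.zeroCreate n m
    let computeRow i =
        for j in 0 .. m - 1 do
            let mutable acc = 0.0
            for x in 0 .. k - 1 do
                acc <- acc + a.[i, x] * b.[x, j]
            result.[i, j] <- acc
    if n < parallelThreshold then
        for i in 0 .. n - 1 do computeRow i                   // small: sequential
    else
        Parallel.For(0, n, fun i -> computeRow i) |> ignore   // big: parallel
    result
```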
Thanks Mike. It sounds like registering contracts with the framework would be the core enabling technology for proper blame assignment and proactive failure prevention. It almost reads as if you guys are planning to start working on that.
I wonder if it is, or someday will be, possible to interrogate a method about its contracts at runtime, so the caller could ensure compliance before actually invoking the method? E.g., before sending a big batch of data over the wire for pre-processing and loading into a database in one transaction, I would get an abstract code tree from the transformation service that represents all, or at least some, of the checks, run those checks locally, and perform corrective actions proactively.
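Purely as an illustration (no such introspection API exists in the contracts framework as far as I know; the interface and names below are invented):

```fsharp
// Invented interface: a service exposes its preconditions as executable
// predicates that the caller can fetch and evaluate locally.
type IContractIntrospection =
    abstract GetPreconditions : methodName: string -> (obj -> bool) list

// Run the remote method's checks locally before the expensive round-trip.
let safeToSend (service: IContractIntrospection) (batch: obj) : bool =
    service.GetPreconditions "LoadBatch"   // "LoadBatch" is a made-up method name
    |> List.forall (fun precondition -> precondition batch)
```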
Units of measure in F# are an exciting feature. No doubt about it. I was really impressed when I learned they were being added to the language. They surely address a lot of potential issues with measurement mismatches. Yet run-time support for measures would still make a lot of sense.
If you are reading data from external sources at run-time (files, sensors, or web services), you still have to implement all the measurement tracking and conversion yourself. If this could be married to contracts somehow, then the application would just have to tell the framework that it expects mass to be in kilos, and if the input feed turns out to be in pounds, the conversion would happen behind the scenes.
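Today that boundary code has to be written by hand; this is roughly what I'd want the framework to generate from such a declaration (a sketch, with made-up kg/lb names and unit tags):

```fsharp
[<Measure>] type kg
[<Measure>] type lb

// The constant carries the unit lb/kg, so misuse won't type-check.
let lbPerKg = 2.20462<lb/kg>

// Hand-written normalization at the boundary; the wish is that declaring
// "I expect kg" in a contract would make the framework generate this.
let normalizeWeight (raw: float) (unitTag: string) : float<kg> =
    match unitTag with
    | "kg" -> raw * 1.0<kg>
    | "lb" -> raw * 1.0<lb> / lbPerKg
    | other -> failwithf "unknown unit of measure: %s" other
```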
Another presumably useful feature would be to declare measures off of classes/types. If, for example, I'm counting my chickens, I don't want to be able to inadvertently add that count to the count of eggs, unless I explicitly coerce chickens and eggs to be "things". The compile-time measures can already express the chicken/egg part, just not derived from existing classes; a sketch:
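```fsharp
[<Measure>] type chicken
[<Measure>] type egg

let chickens = 3<chicken>
let eggs = 12<egg>

// let nonsense = chickens + eggs       // does not compile: units differ
let things = int chickens + int eggs    // explicit coercion to unitless "things"
```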
Anyhow, compile-time support is a very good start.