On the subject of strict versus lazy evaluation.
It seems that a sufficiently advanced runtime can and should use both, based on the accumulated "knowledge" (stats) about the workloads being executed.
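For concreteness (the names below are mine, purely illustrative): strict evaluation computes a value at the point of binding, while lazy evaluation wraps the work in a thunk that runs only when the value is actually forced.

    def expensive(x):
        return x * x                    # stands in for real work

    strict_result = expensive(7)        # strict: computed right here, right now
    lazy_result = lambda: expensive(7)  # lazy: a thunk; nothing runs until forced
    print(strict_result, lazy_result())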
The expectation that something can be strictly evaluated is false in the absolute sense, because each and every CPU instruction and memory read/write may fail due to faulty hardware. Yet it can be statistically true: if the hardware is somehow known to be 99.something% reliable, the assumption can be made safely (in the statistical sense); otherwise nothing could ever be computed or done at all.
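To put a number on it (the figures below are hypothetical, chosen only to show how per-step reliability compounds across a whole run):

    # Reliability compounds multiplicatively over every instruction
    # a strict run must execute. Numbers are made up for illustration.
    per_instruction_ok = 1 - 1e-9    # assume one failure per billion instructions
    instructions = 10**9             # a billion-instruction computation
    p_whole_run_ok = per_instruction_ok ** instructions
    print(f"P(entire run succeeds) = {p_whole_run_ok:.3f}")  # ~0.368, i.e. 1/e

So the per-step failure rate has to be vastly smaller than the reciprocal of the computation's length before "this will strictly evaluate" is a safe statistical bet.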
(I believe that proponents of strict evaluation are stuck because they base their reasoning on incorrect assumptions without explicitly stating what those assumptions are, a known issue that has plagued physics for centuries and most likely still does.)
The same must apply to algorithms as well. If an algorithm is known to be predictable on a given workload (either statistically or by divine intervention of the mister human), it is OK to evaluate it strictly. If there is no prior knowledge, lazy evaluation is the way to go, and please gather execution stats upon exit so they can be reused in future evaluations/executions. And if it does not exit within the requested amount of time, abandon the execution (preferably kill it first) and blacklist it (till the end of time, or the next divine intervention).
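A minimal sketch of that loop, with everything hedged: the workload keys, the in-memory stats store, and the one-second deadline are all invented here, and since Python threads cannot actually be killed, a real runtime would run the workload in a separate process to honor the "kill first" part.

    import concurrent.futures as cf

    stats = {}          # workload key -> "predictable"; hypothetical in-memory store
    blacklist = set()   # workloads that overran their deadline

    def evaluate(key, workload, arg, timeout=1.0):
        if key in blacklist:
            raise RuntimeError(f"{key} is blacklisted")
        if stats.get(key) == "predictable":
            value = workload(arg)          # known to behave: evaluate strictly
            return lambda: value
        def thunk():                       # no prior knowledge: defer, run under a deadline
            pool = cf.ThreadPoolExecutor(max_workers=1)
            future = pool.submit(workload, arg)
            try:
                result = future.result(timeout=timeout)
            except cf.TimeoutError:
                blacklist.add(key)         # till the next divine intervention
                pool.shutdown(wait=False)  # abandon; a thread cannot truly be killed
                raise
            pool.shutdown(wait=False)
            stats[key] = "predictable"     # it exited in time: reuse that knowledge
            return result
        return thunk

Forcing evaluate("square", lambda n: n * n, 12)() runs lazily under the deadline the first time; once the workload is seen to exit in time, later calls go strict, which is exactly the feedback loop described above.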
From 10,000 feet this looks like a nice logical schema with a feedback loop, which is statistically a necessity for each and every successful ecosystem (observe nature).