Oh heck I give up.
E-mail me at firstname.lastname@example.org
In 100% seriousness, C# 3.0, Channel 9's Erik Meijer, and SICP + lecture videos were all it took to make me into a full-on functional programmer. The only problem is that there seems to be no turning back! I no longer consider jobs that require me to use imperative languages more than a fraction of the time. Well, maybe I would if they let me use C# in an immutable-by-default style with code contracts.
Dec 26, 2011 at 8:23AM
@brianbec: Awesome - I can't wait to hear your thoughts on F#. We can trivially implement a nice Rx monad with F#'s computation expressions. However, it looks like there are still a couple of implementation issues with it that the Rx team might look at.
I'm just now exploring the reactive facilities that F# conveniently exposes. I'm using it for declarative simulation building (think objects that transform themselves through time according to their environment, without any manual mutation or effectful message passing). Because there is no classical mutation (and no mutation at all in debug mode), it should allow automatic parallelization of the simulation via dependency analysis. It should also make reasoning about large simulations tractable (you should see the simulation engine behind The Sims... a gargantuan mess in the eyes of a functional programmer, yet taken for granted by OO people). For further detail, see AML and DOL.
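Roughly what I mean by "transform themselves through time without mutation", sketched in TypeScript with made-up names (nothing here is from AML or DOL):

```typescript
// Sketch of a mutation-free simulation step: each entity is an immutable
// value, and advancing time produces a NEW world rather than editing the old.
interface Particle {
  readonly position: number;
  readonly velocity: number;
}

// Pure transition: old state in, new state out -- no in-place mutation.
const step = (p: Particle, dt: number): Particle => ({
  position: p.position + p.velocity * dt,
  velocity: p.velocity, // environmental forces would be applied here
});

// Advancing the whole world is just a map; since steps share no mutable
// state, a dependency analysis could safely run them in parallel.
const stepWorld = (world: readonly Particle[], dt: number): readonly Particle[] =>
  world.map(p => step(p, dt));

const world0: readonly Particle[] = [{ position: 0, velocity: 2 }];
const world1 = stepWorld(world0, 0.5);
// world1[0].position === 1, and world0 is untouched
```

The payoff is exactly the reasoning property above: any two `step` calls are independent by construction, so parallelization needs no locking analysis.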
Dec 25, 2011 at 5:41PM
I am a huge BrianMcn fan, but I wonder why he seems generally unsatisfied with immutable Maps à la F#. You can typically transform such persistent data structures with O(log n) performance. When you consider that Dictionary is constant time, but with a comparatively large constant factor and memory footprint, the race seems a little more even.
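The trick that makes the O(log n) transform cheap is structural sharing. Here's a bare-bones persistent binary search tree in TypeScript to show the idea (F#'s Map actually uses a balanced AVL tree, so treat this as a sketch, not its implementation):

```typescript
// Persistent (immutable) BST with structural sharing.
type Tree = {
  readonly key: number;
  readonly left: Tree | null;
  readonly right: Tree | null;
} | null;

// Insert returns a NEW tree; only the path from the root down to the new
// node is copied (O(log n) nodes on a balanced tree) -- every other
// subtree is shared, unchanged, with the old tree.
const insert = (t: Tree, key: number): Tree => {
  if (t === null) return { key, left: null, right: null };
  if (key < t.key) return { key: t.key, left: insert(t.left, key), right: t.right };
  if (key > t.key) return { key: t.key, left: t.left, right: insert(t.right, key) };
  return t; // key already present
};

const contains = (t: Tree, key: number): boolean =>
  t !== null && (t.key === key || contains(key < t.key ? t.left : t.right, key));

const t1 = insert(insert(insert(null, 5), 3), 8);
const t2 = insert(t1, 4);
// t1 still lacks 4; t2 has it; t2 physically shares t1's right subtree
```

Because old versions stay valid, you also get snapshots and safe sharing across threads for free, which is part of why the Dictionary-vs-Map comparison isn't a pure big-O contest.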
EDIT: Meant Brian Beckman fan. But still a BrianMcn fan too.
Dec 25, 2011 at 4:17PM
I think the three keys to building large systems that can be reliably reasoned about are to a) heavily leverage immutability in the general case, b) use declarative programming with DSLs where valuable, and c) architect for reasonably-grained modularity. Also necessary is tooling that assists you in reasoning about programs, like strong type systems and/or code contracts. Especially powerful are languages that make it easy to build analysis tools for them (such as DSLs built on s-expressions or XML). And yes, then there are unit tests and integration tests. So I guess there are six keys... at least.
Refactoring toward this implementation model over time should help get the complexity under control. But for large code bases, it takes years, plus complete commitment from the organization and engineering team over that whole period. That seems to be the hard part.
EDIT: Oh, and you also have to tool up for all of it, in terms of build systems and core competencies. I can see why organizations get stuck with legacy implementation models... Phew.
Dec 25, 2011 at 3:58PM
Cheers for profiling BEFORE optimization. I am always struck by how many engineers, even experienced engineers, do not follow this. Their code becomes so obfuscated by these micro-optimizations that it becomes a nightmare to maintain and change.
Great work!
Now, just get the first step to compile, making the successive steps necessary only when justified by performance needs. I'll bet very few refinements will be needed in practice, even for real-time programming (assuming you have a reasonable GC algorithm, unlike the .NET Compact Framework on the Xbox 360).
Neat, of course, but also terrifying in practice.
Simple graphical HLSL programs are painful to debug, even with great tools like PIX.
Debugging this type of highly optimized code will be a nightmare. I'm not even sure how to write a reasonable unit test for this type of code. We can reason about the underlying machine and look at frame rates, but that still falls short of the profiling capabilities we're used to on the CPU.
Hopefully we'll start seeing more cores on the CPU so that we can keep doing our physics calculations there, where our development tools are sufficient.
Using a declarative approach, perhaps there's a way to specify the outcome (a state delta or some such) of desired changes under certain conditions, instead of invoking mutating operators or imperative statements directly. Since you'd be specifying the outcome declaratively, presumably it could be done within a DSL. I've not implemented such a feature before, however, so I don't know whether it would actually work...
You just defined a DSL I think.
Perhaps the use of 'Domain' in DSL is a misnomer. What is special about a DSL is not the domain it covers but the specialized forms of expression it provides to solve a specific class of problems (in this case, defining dataflows). The class of problems doesn't necessarily have to apply to a specific application domain, AFAIK, though that's the common usage.
In short, a DSL captures an expression model. That expression model can be conveniently applied to one or more domains (assuming the DSL is well-motivated).
I've only written a couple of DSLs, though, so someone please correct this if there is a more concrete or better definition.
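By "expression model" I mean something like this toy embedded dataflow DSL, sketched in TypeScript (all names invented for illustration):

```typescript
// Toy embedded DSL for dataflows: the "specialized form of expression" is
// a small algebra of nodes; evaluation lives in a separate interpreter.
type Flow =
  | { kind: "source"; value: number }
  | { kind: "map"; fn: (n: number) => number; input: Flow }
  | { kind: "zip"; fn: (a: number, b: number) => number; left: Flow; right: Flow };

// Combinators give the DSL its surface syntax:
const source = (value: number): Flow => ({ kind: "source", value });
const map = (input: Flow, fn: (n: number) => number): Flow =>
  ({ kind: "map", fn, input });
const zip = (left: Flow, right: Flow, fn: (a: number, b: number) => number): Flow =>
  ({ kind: "zip", fn, left, right });

// One interpretation of the expression model. Others (graph analysis,
// pretty-printing, parallel scheduling) could reuse the same expressions
// unchanged -- that's the sense in which the DSL isn't tied to one domain.
const run = (f: Flow): number => {
  switch (f.kind) {
    case "source": return f.value;
    case "map": return f.fn(run(f.input));
    case "zip": return f.fn(run(f.left), run(f.right));
  }
};

const flow = zip(map(source(3), n => n * 2), source(4), (a, b) => a + b);
// run(flow) === 10
```

The expressions describe *what* the dataflow is; nothing in `Flow` commits you to any particular application domain, which is the point I was fumbling toward above.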