And this is what people mean when they say cross-platform is expensive: every additional platform you want to target increases complexity and decreases the options available to you. HPC is pretty specialised stuff as it is, and Hadoop really only works
with *nix-based clusters (which, as of today, are easily the majority). There are solutions like Windows HPC Server with something like
MPI.Net, but don't expect a generic one-size-fits-all solution anywhere; HPC clusters are just too specialized for there to be an easy "it just works" answer for all of them.
If clustering is an important feature, I'd have to say you'd be better off going with Hadoop right now. As much as I dislike Java, it isn't
that different from C#, and given that, odds are, you'll be running on a Linux platform, it makes more sense than Mono right now. There aren't many mature .NET clustering solutions as it is, so getting one running on Mono is an uphill struggle
that makes very little sense.
This is a client-server architecture, and I have very little against Java on the server, but we all know Java sucks on the client (e.g. Swing, seriously).
So I could make the server software Java-based and the client .NET-based, but my application is _extremely_ modular. It's basically a small core and a collection of plugins that build on that core, and that's true for both the client software and the server software. Some
of these plugins, for performance reasons, are bindable to BOTH client and server. So using two languages obviously complicates things like crazy: I couldn't write plugins that bind into both, and people extending this software would have to contend with both the
Java Plugin Framework and Mono.Addins.
I am just going to leave clustering for another day, which means my program will be hobbled for many real purposes (large datasets are not uncommon). This is hard any way you look at it.
It's just upsetting because the kind of calculations I am doing might as well have been designed for MapReduce. It's almost scary. Hadoop would be a perfect fit.
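For anyone unfamiliar with why a calculation "fits" MapReduce: the pattern is just a map step that emits (key, value) pairs, a shuffle that groups pairs by key, and a reduce step that folds each group. Here's a minimal in-memory sketch of that shape in plain Java (no Hadoop dependency; the word-count example and all names are mine, purely for illustration). Hadoop's actual Mapper/Reducer API follows the same structure, just distributed across a cluster:

```java
import java.util.*;
import java.util.stream.*;

public class MapReduceSketch {

    // "map" phase: turn one input record into (word, 1) pairs
    static Stream<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\W+"))
                     .filter(w -> !w.isEmpty())
                     .map(w -> Map.entry(w, 1));
    }

    // "reduce" phase: fold all the values collected for a single key
    static int reduce(String key, List<Integer> values) {
        return values.stream().mapToInt(Integer::intValue).sum();
    }

    public static Map<String, Integer> run(List<String> input) {
        // "shuffle": group mapped pairs by key -- on Hadoop the framework
        // does this between the map and reduce phases, across machines
        Map<String, List<Integer>> grouped = input.stream()
            .flatMap(MapReduceSketch::map)
            .collect(Collectors.groupingBy(
                Map.Entry::getKey,
                Collectors.mapping(Map.Entry::getValue, Collectors.toList())));

        Map<String, Integer> result = new HashMap<>();
        grouped.forEach((k, v) -> result.put(k, reduce(k, v)));
        return result;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("the cat sat", "the cat ran")));
    }
}
```

If your per-record computation is independent (map) and your aggregation is associative (reduce), the framework can parallelise both phases for free, which is exactly what makes large datasets tractable on a cluster.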