So does this mean that C# will be getting the equivalent of VB.NET's "When" keyword that can appear after a Catch, to control exception filtering?
To briefly recap the issue: in C# today we can only filter exceptions by their exact type, or by their position in the type hierarchy (catching a base type). Unfortunately that isn't very useful, because exceptions are not generally organized into useful hierarchies.
Deriving from ApplicationException to indicate "non-fatal", for example, is now discouraged, so in practice we should expect custom exception classes to all derive directly from Exception. Meanwhile, due to the lack of flexible exception filtering, the
Exception Handling block of the Enterprise Library suggests catching the universal Exception base class, which directly contradicts
the advice of the CLR program manager. The CLR team also suggests using VB.NET (or hand-written IL) to get access to Catch/When functionality from languages, such as C#, that don't have it.
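To make the contrast concrete, here's a sketch of the catch-and-rethrow workaround C# currently forces on you (DoWork is a hypothetical stand-in, and filtering on Message is purely for illustration; VB.NET's Catch ... When runs its filter before the stack unwinds, whereas this decides inside the handler, after unwinding):

```csharp
using System;

class Program
{
    // Hypothetical worker that fails; stands in for real code.
    static void DoWork() => throw new TimeoutException("transient");

    static void Main()
    {
        try
        {
            DoWork();
        }
        catch (TimeoutException ex)
        {
            // C# can only "filter" here, after the stack has already unwound.
            // VB.NET's "Catch ex As TimeoutException When ..." decides before
            // the handler runs, preserving the faulting frame for debugging.
            if (ex.Message != "transient")
                throw;                 // rethrow anything we didn't want
            Console.WriteLine("retrying");
        }
    }
}
```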
Yes, I saw that Hoare quote. That's the same conclusion I reach in that blog post, i.e. "It's not immediately obvious that it is
[a bind function], because a Bind function talks to its caller in terms of values wrapped in the monad, but deals in "naked" values with the function passed to it. But
IfNotNull uses ordinary reference variables in both situations." This is because a feature is missing from C#: it ought to be possible to somehow tag a reference variable to say that it is definitely not null.
In Spec# they used MyRefType! (an exclamation-mark suffix) to mean non-nullable; so the default is the wrong way round, but they were aiming at backward compatibility with C#, so they had to go that way. The compiler would then look for an if (myVar != null) and silently treat myVar
as non-nullable within the truth branch.
It looks like the closest we'll get to this in real C# is the Code Contracts library in CLR 4.0, but I really wish non-nullable were the default. Unfortunately it complicates the syntax in some areas (where should non-nullable instance fields of a class be initialized?),
but it would surely have been worth it, as Hoare says.
Given that reference variables can already be null or else point to something useful, don't they already have what is needed to represent the maybe monad? All that's lacking is the convenience of bind.
An extension method can act as the bind operation, and that's all you need, isn't it? Or have I missed something? (I'm relatively new to Haskell!)
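A minimal sketch of what that extension-method bind might look like (the name IfNotNull follows the post; the exact signature here is my assumption): null plays the role of Nothing, any other reference plays Just, and the extension method is the bind that chains the two.

```csharp
using System;

static class MaybeExtensions
{
    // Bind for the "maybe monad" encoded as an ordinary reference:
    // null propagates, non-null values are fed to the next step.
    public static TResult IfNotNull<TSource, TResult>(
        this TSource source, Func<TSource, TResult> next)
        where TSource : class
        where TResult : class
    {
        return source == null ? null : next(source);
    }
}

class Demo
{
    static void Main()
    {
        string s = "hello";
        // Chains short-circuit on the first null instead of throwing.
        string upper = s.IfNotNull(x => x.ToUpper());              // "HELLO"
        string none  = ((string)null).IfNotNull(x => x.ToUpper()); // null
        Console.WriteLine(upper ?? "(null)");
        Console.WriteLine(none ?? "(null)");
    }
}
```

Note that extension methods can be called on a null receiver, which is what makes this encoding work at all.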
Steve, I want the same thing: an extensible language. It's a real shame that the "syntactic sugar" (anything that can be specified in terms of other language features: the using statement, LINQ queries) was not built on a macro-like system that
users could also take advantage of.
I have a question. I had assumed that the definition of "bad" would be "a function that has side effects". But Brian gives the key definition as that of a mathematical function, which is one that returns the same value given the same arguments. This is much neater,
because it means that you don't have to decide what "side effect" means (it could mean a lot of things). But then I realised that this means a mathematical function CAN have side effects! It just mustn't allow anything to affect what IT returns for a given
set of arguments - but it is okay for it to affect what other functions return. Those other functions would not be mathematical, because they reveal the side effects. So a mathematical function doesn't reveal side effects via its return value - which says nothing
about causing them.
So my question is: as long as printf returns the same value for a given argument, what does it matter if it has the side effect of printing text to the console? Why can't I add it to my program harmlessly? Previously I was using all mathematical functions;
now I'm adding a printf call somewhere, and that's also a mathematical function, so nothing has changed.
(Or can I deduce from this that Haskell's equivalent of printf returns a new, modified instance of the console every time you call it...? Semi-serious question!)
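To make that distinction concrete, a small sketch (the names are mine): AddAndLog passes the return-value test - same argument, same result - yet it still has a side effect, and ReadCounter is the function that betrays it.

```csharp
using System;

class Demo
{
    static int counter = 0;

    // "Mathematical" by the return-value test: same argument, same result...
    static int AddAndLog(int x)
    {
        counter++;           // ...but it still mutates shared state
        return x + 1;
    }

    // This function's result depends on how many times AddAndLog has run,
    // so IT is the one that fails the mathematical-function test.
    static int ReadCounter() => counter;

    static void Main()
    {
        Console.WriteLine(AddAndLog(1));  // 2
        Console.WriteLine(AddAndLog(1));  // 2 - same argument, same result
        Console.WriteLine(ReadCounter()); // 2 - the side effect leaks out here
    }
}
```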
That point about a base language that can support different syntaxes through library-like extensions - surely that has to be the way to go, in the long term?
We already have an ever-growing range of APIs in the CLR that let us dynamically compile code snippets into executables. The C# and VB compilers are "libraries" in that sense. They need to be reusable in different contexts, e.g. partial compilation for IDE IntelliSense
as well as the "real" compilation process. So why not implement those two languages as AST processors on the same general compilation engine, and then introduce a way to switch syntax libraries in the middle of a file, or even in the middle of an expression?
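As a sketch of that compiler-as-library idea with today's APIs (the snippet and names here are mine, not from any actual proposal), System.CodeDom already lets you drive the C# compiler programmatically:

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class Demo
{
    static void Main()
    {
        // Source text handed to the C# compiler at runtime.
        var source = @"
            public static class Generated
            {
                public static int Double(int x) { return x * 2; }
            }";

        var provider = new CSharpCodeProvider();
        var options = new CompilerParameters { GenerateInMemory = true };
        CompilerResults results = provider.CompileAssemblyFromSource(options, source);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("compilation failed");

        // Invoke the freshly compiled code via reflection.
        var type = results.CompiledAssembly.GetType("Generated");
        var answer = (int)type.GetMethod("Double").Invoke(null, new object[] { 21 });
        Console.WriteLine(answer); // 42
    }
}
```

Of course this reuses the whole compiler as a black box; the AST-processor idea would mean exposing the stages inside it.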