C9 Lectures: Dr. Ralf Lämmel - Advanced Functional Programming - Type Classes


The Discussion

  • User profile image

    Currently watching and great so far...


    But on slide 20,


    data Expr x => Neg x = Neg x


    you don't really need the constraint in the data declaration, right?


    Actually, they are even considered to be a misfeature by some. (Post found via this Haskell wiki page)


  • User profile image

    Very well spotted!

    You are absolutely right. 

    The constraint is not *needed*.


    Generally, I use them when I go from closed to open.

    Because they *exactly* carry over the constraints on the types of the constructor components.

    By leaving them out, you could accidentally write more polymorphic code.

    I would call this vacuous polymorphism and I don't like it.

    So it is a bit more than documentation.
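    A small sketch of the point (my reconstruction; the class member `evaluate` and the `Lit` type are assumptions, the slides may use other names):

```haskell
-- Assumed setup: a type class for expression forms, as in the
-- lecture's expression examples (names are my guesses).
class Expr x where
  evaluate :: x -> Int

data Lit = Lit Int
instance Expr Lit where
  evaluate (Lit i) = i

-- Without a datatype context, Neg is "vacuously polymorphic":
data Neg x = Neg x

-- This type checks although String is not an Expr instance:
notAnExpr :: Neg String
notAnExpr = Neg "hello"

-- The constraint only shows up where we actually evaluate:
instance Expr x => Expr (Neg x) where
  evaluate (Neg x) = negate (evaluate x)
```

    With the H98 context, data Expr x => Neg x = Neg x, the term Neg "hello" would be rejected right at construction, which is the extra bit beyond documentation described above.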


    I left this sort of constraint out for Add (slide 17) because I wanted to show simple code.

    The Add constraints are in the source file, though. :-)


    So indeed, you don't need them.

    They don't make type checking in any way easier.


    Yes, I know that they are debated, and people keep telling me so. :-)

    But I find them quite useful, and I just continue to use them as long as it takes for either ...

    - others to agree with me, or

    - Haskell designers to remove the feature.






  • User profile image

    I'm not too familiar with it, but how do H98's data constraints compare to GHC's GADTs? The GHC docs argue that the behaviour of GADTs is more useful than that of H98's data constraints.

    (But since I don't really have experience in this particular area of Haskell, I can't tell how strong this argument is.)

  • User profile image

    Ok, now we are nicely in a wormhole.


    Yes, H98 datatype constraints only *require* a constraint.

    ... whereas GADT constructor constraints make *available* a constraint.

    My code suffers from those required constraints because the type-class instances need to repeat the constraints.

    If they were made available, the code would be shorter.
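    In code, the "required" side looks roughly like this (a sketch; the member `evaluate` and `Lit` are my stand-ins, and on GHC 7.2 or later the now-deprecated DatatypeContexts extension must be enabled):

```haskell
{-# LANGUAGE DatatypeContexts #-}  -- plain Haskell 98; deprecated in GHC >= 7.2

class Expr x where
  evaluate :: x -> Int

data Lit = Lit Int
instance Expr Lit where
  evaluate (Lit i) = i

-- The datatype context makes 'Expr x' *required* wherever Neg is used ...
data Expr x => Neg x = Neg x

-- ... but it is not made *available*: the instance must repeat it.
instance Expr x => Expr (Neg x) where
  evaluate (Neg x) = negate (evaluate x)
```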


    I would like to defer GADTs to a later point in time (if I ever get there in this series).

    The issue with datatype constraints has long been discussed, but no resolution was reached for Haskell 98 or Haskell'.

    Now with GHC's GADTs, of course, a fresh option could be explored, because it comes with a different syntax.


  • User profile image

    Thanks Ralf, I now suffer from Type Class envy! This kind of generic programming is way beyond what I'm able to express in my current work software environments but I love to have something to look forward to, as well as having something to tinker with at home.


    I must admit that whenever I see several ways to solve one general problem I become suspicious. Is there perhaps an even more general, better way to do it. It reminds me of the quest for a unified theory in physics (like 11-dimensional M-theory, unifying 5 different 10-dimensional string theories). I guess in programming we already have something like this, like the lambda calculus but are still heavily exploring generic constructs at the higher-level, in particular ensuring types come along for the journey.


    Charles spoke with Gilad Bracha who had an interesting complaint about Type Classes, saying something to the effect of that they change semantics of the language (right at the beginning of the video).


    This is a great series, thanks!

  • User profile image

    I pointed Ralf at that Gilad commentary, and he maintains his own position on the argument, which he can share with you. :-)


    Really glad you're finding this content stimulating. We're very lucky to have someone of Ralf's knowledge and experience levels teaching us advanced topics in programming. Amazing series! Thank you, Ralf! Can't wait for lecture 3.


  • User profile image

    For what it is worth, I have added a code variant to the svn repository to show the detailed impact of going from Haskell 98 to GADTs. So basically the new Neg datatype and the corresponding Expr instance look like this:


    data Neg x where
      Neg :: Expr x => x -> Neg x

    instance Expr (Neg x)


    Note that the Expr constraint is only needed once: in the datatype declaration, and then not again in any instance for Neg.


    I think the discussion of constraint propagation is often linked with Hughes' paper on restricted datatypes from 1999. I agree that the GADT-based behavior is more useful than regular datatype constraints, certainly in our example. A similar behavior has long been discussed for regular datatype constraints, but no consensus was reached. Hughes' paper hints at some difficulties of constraint propagation; see Section 7. Anyway, it will be intriguing to see how the Haskell standard evolves.
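    For completeness, a compilable sketch of the GADT variant with an evaluation member (the member name is my assumption; the real code is in the svn repository):

```haskell
{-# LANGUAGE GADTs #-}

class Expr x where
  evaluate :: x -> Int

data Lit = Lit Int
instance Expr Lit where
  evaluate (Lit i) = i

data Neg x where
  Neg :: Expr x => x -> Neg x

-- No 'Expr x =>' context here: pattern matching on Neg makes the
-- constraint packed into the constructor available again.
instance Expr (Neg x) where
  evaluate (Neg x) = negate (evaluate x)
```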


  • User profile image

    Re: "but [we] are still heavily exploring generic constructs at the higher-level, in particular ensuring types come along for the journey."


    Yes, I think this is a very useful way of putting it.

    Don't have anything deep to add.

    Obviously, you have seen some options doing this exploration.

    Here are my favorite options:

    a) Static Typing Where Possible, Dynamic Typing When Needed

    b) Pluggable Type Systems

    c) Dependent typing



    Type classes sort of allow us to fake pluggable type systems and dependent typing quite a bit.
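    A tiny, hand-rolled illustration (my own, not from the lecture) of such faking: type classes can reflect type-level Peano numerals into run-time values, which gives a first taste of dependently typed flavor.

```haskell
-- Type-level Peano numerals; the empty datatypes have no values.
data Zero
data Succ n

-- The class computes a value from a *type*; the argument is only a
-- proxy and is never inspected.
class Nat n where
  toInt :: n -> Int

instance Nat Zero where
  toInt _ = 0

instance Nat n => Nat (Succ n) where
  toInt x = 1 + toInt (predProxy x)
    where
      predProxy :: Succ m -> m
      predProxy _ = undefined
```

    For example, toInt (undefined :: Succ (Succ Zero)) evaluates to 2; laziness keeps the undefined proxies from ever being forced.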


    One thing that interests me personally is the integration of typing (or verification) with program evolution. That is, it is great if we can write programs in a highly structured way to receive tons of type-system support, but once we need to evolve the program in unanticipated ways, we don't want to fall out of the safe zone entirely. So this is the thing with the expression problem. We may be fond of heavily preparing our program for extension, but ultimately all this effort may be in vain because the evolution scenario is not exactly about data or operation extension. But then again, it is cool to have both forms of extensibility w/o any notational overhead, in which case we are at least not tricked into any design exorcism. (Alas, the type-class approach to the expression problem does cause notational overhead.)



    Re: Bracha's type-class resentment


    Context -- Quotes: 


    "Essentially you view the type system as a static analysis and you can choose [...]."

    "None of these analyses actually change what the language does."

    "I have this religious point that the type system should not impact the evaluation rules".



    He says that most type systems are not like this. As an exception, he points out that the typed lambda calculus is OK (presumably because of type erasure, which is mentioned later). He also does not like dependent typing, btw; he says this is because of its complexity. :-)



    To me, there is not enough information to become 100% conclusive on Bracha's opinion, and to start defending type classes. If I had been there at the roundtable discussion (without a roundtable), I would have asked Bracha the following question:


    "Consider a simple C# subset without overloading, perhaps even without interfaces (for simplicity). Now assume a straightforward static type system for this subset. The type system would provide us with the guarantee that any method invocation is feasible at runtime. Of course, there is still some minimal, shared knowledge between type system and runtime system. That is, objects at runtime would need to carry around their nominal class type (say, class id). Also, constructor invocations would need to refer consistently to class ids. Those ids would be used in dispatching method calls to the appropriate implementations based on the class id of the receiver object or the static type for a constructor/static method call. Do you have any issue with this sort of 'impact', if any, of the type system on the evaluation rules?"


    I would really hope that the answer is "no". That is, AFAIK, this sort of type system should count just as a static analysis. This usage of ids should not count as 'a type system impacting the evaluation rules'; it is merely about resolution: we connect constructor invocations and other type references to class declarations, and maintain ids or qualified type names in runtime values.


    In the following, I assume that Bracha is fine with the above OO type system.

    (Otherwise I need to reconsider.)

    So I would argue as follows:


    "But hey, type classes do not fundamentally impact runtime semantics; it is merely an 'implementation detail' that type system rules and evaluation rules are connected. To see this, consider the following option. We maintain a global pool of instance declarations which are indexed by the datatype ids of the instance heads. Whenever we need to apply a type-class function (method), we use a parameter expression to extract the datatype id from the runtime value. (We would need to translate datatype constructors into datatype ids.) In this manner, we can do dispatching for type classes very much in the same way as in the earlier OO case."


    Now, this comparison is a bit brittle, as Bracha or myself would notice. The extraction of a datatype id from a runtime value does not work together with laziness. Also, we are in trouble, if the type-class parameter is not used on the parameter positions of the member function (but only on the result position), and again, no datatype id can be extracted from any available runtime value.


    But all such issues are easily resolved by something that is established in language design and implementation: elaboration. The intermediate representation would just be arranged to contain extra datatype ids that were obtained through type checking/inference; extra parameter positions would be set up to pass datatype ids at runtime where necessary. 
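    The elaboration in question can be sketched by hand in plain Haskell (hypothetical names; this is the standard dictionary-passing translation, with the extra parameter positions spelled out):

```haskell
-- The class becomes a record of its members (a "dictionary"):
data ExprDict x = ExprDict { evaluateD :: x -> Int }

data Lit   = Lit Int
data Neg x = Neg x

-- Each instance becomes a dictionary value ...
litDict :: ExprDict Lit
litDict = ExprDict (\(Lit i) -> i)

-- ... and an instance with a context becomes a function on dictionaries:
negDict :: ExprDict x -> ExprDict (Neg x)
negDict d = ExprDict (\(Neg x) -> negate (evaluateD d x))

-- At a call site, the type checker would have inserted the dictionaries:
example :: Int
example = evaluateD (negDict litDict) (Neg (Lit 3))
```

    Here example yields -3; no datatype ids need to be extracted from runtime values, so laziness and class parameters that occur only in result positions pose no problem.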


    I don't know how the conversation would go from there. 


    Until further notice, I don't think that anything is wrong with type classes. :-)




    PS: I like Bracha's wording when he pinpoints that a type system should not impact the evaluation rules. I don't particularly like the mention of 'change' here and there in the conversation---as if the type system changed what the language does or changed the semantics of programs. This sounds as if the program had a semantics before typing and a different one after typing. However, the standard approach to Haskell with type classes is rather that the program has no semantics without typing. So I don't see where the change is.



  • User profile image

    Hi Ralf,


    Here is the 64M question: assume I don't care about type safety; can I define a dynamic language that supports type classes? If yes, we can write a paper. If not, I don't want them!

  • User profile image

    Hi Gilad,


    It would be easy to have dynamically typed single-parameter type classes --- if we require that there is one occurrence of the type parameter among each member's argument types (read "argument positions" in dynamically typed speak), and if we also have a non-lazy language. In this case, we would essentially use type classes as a kind of retroactive interface implementation mechanism.


    All I am saying is that single-parameter type classes with the listed occurrence restriction for the type parameter are not particularly advanced. In an OO setup, it would just be another option for adding methods to existing classes. Clearly, dynamically typed languages are happy to provide such operation extensions in other ways (method wrappers, advanced reflection, change boxes, what have you). Now, if we allow for multi-parameter type classes, we get into the neighborhood of multiple dispatch (in OO terms), and again, dynamically typed languages have covered that ground.
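    An illustrative sketch of the multi-parameter case (my example, not from the discussion): instance selection dispatches on several types at once, much as CLOS's defmethod dispatches on several arguments.

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

data Asteroid = Asteroid
data Ship     = Ship

-- The instance is chosen based on *both* argument types:
class Collide a b where
  collide :: a -> b -> String

instance Collide Asteroid Ship where
  collide _ _ = "asteroid hits ship"

instance Collide Ship Asteroid where
  collide _ _ = "ship hits asteroid"
```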


    So I guess the particular contribution of type classes is that they can do it with static type safety.


    Writing a paper sounds cool; we probably need to push for a bit more material.




  • User profile image

    great series so far. Really enjoying this

  • User profile image


    really love the series so far.

    I still hope for type classes in the CLR ;-)

  • User profile image

    So is this, like, the Haskell encoding of CLisp's defmethod?


  • User profile image

    Yes, Common Lisp is way more powerful (and a tiny bit less typed) :-)

    But seriously, I always argue that type classes are nothing extremely impressive for a dynamically typed programmer, certainly not for one who knows about and loves CLOS, for example.

    Type classes in Haskell just bring some bits of similar expressiveness to a strongly typed setting.

  • User profile image

    ... bringing some bits of [LISP] expressiveness to a [statically] typed setting.


    That about sums it up for the last two or three decades of mainstream language development. :-)


    Yes, of course, (static) types are a huge difference compared to the run-time orientation of Common Lisp. One thing that comparisons of CLOS and OO would always point out is the multi-dispatch capability of defmethod - and yet here you've shown how to achieve that, even at compile time, through Haskell's type resolution!


    Finding such parallels really helps to clarify concepts, and to bring discipline even into coding under permissive languages. After all, there's nothing that can't be expressed in a bit of ASM. :-) It's the insight that we're really after in CS, I think.


    Thank you again for the great lectures!
