Tech Off Thread

131 posts

Forum Read Only

This forum has been made read only by the site admins. No new threads or comments can be added.

Ideas for C# 5.0

  • stevo_

    vesuvius said:
    exoteric said:
    *snip*

    The real performance benefits from this [null checks] come in at the MSIL level, unfortunately, so Spec# etc. is the only way to go. None of this namby-pamby, prittle-pattle, tittle-tattle type nonsense.

     

    With regard to dependency injection, I think Prism is leading the way here, and from a library point of view, it makes sense and the team has really done a fantastic job. If you gave me the cash and the staff to build a high-class, modern application, Prism would be at the top of my list.

     

    With regard to extension methods, you could pick a handful of your favorite static libraries (MessageBox.Show, Regex, even Code Contracts) and add them to your list. This will widen the scope for method-name clashes, so I'm sure the BCL team will resist this somewhat.

     

    That's not to say there isn't some perspicacious thinking going on here, because there is.

    By Prism you mean Unity? Unity is OK, but it's really an entry-level IoC container. In terms of pure DI, I think Windsor and MEF are worth looking at.

     

    Windsor is a much more mature and capable IoC container, and MEF is more focused on different types of DI.

  • Bass

    exoteric said:
    vesuvius said:
    *snip*

    I'm quite curious as to how you'd create a more efficient representation than a struct wrapping a reference type; please elaborate.

     

    Actually this is not about performance at all, it is about correctness.

     

    I'm not very familiar with Spec#, but I believe it has syntax support for non-nullability. Does Spec# not use the off-the-shelf CLR?

     

    As to extension methods, that's a different thread; this one is about C# 5.0 ideas (but I would welcome any discussion about anti-patterns there; I don't think Contract makes sense as an extension-method cluster at all; it does make sense, however, to lift the methods out so they are accessible without name qualification; MessageBox.Show, I don't know about that one - but let's have that discussion in the other thread).

    I like your idea. I'd extend it to say I think .NET should have used non-nullable types by default. But it's probably too late for that. Sad
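
    To make the "struct wrapping a reference type" idea from the quote concrete, here is a minimal sketch of a hypothetical NotNull<T> wrapper (the type name and members are invented for illustration; it only checks at construction time and cannot enforce anything at the MSIL level the way Spec# can):

    public struct NotNull<T> where T : class
    {
        private readonly T reference;

        public NotNull(T reference)
        {
            if (reference == null)
                throw new System.ArgumentNullException("reference");
            this.reference = reference;
        }

        public T Value
        {
            get { return reference; }
        }

        // Implicit conversions keep call sites readable.
        public static implicit operator NotNull<T>(T reference)
        {
            return new NotNull<T>(reference);
        }

        public static implicit operator T(NotNull<T> wrapper)
        {
            return wrapper.reference;
        }
    }

    Note that default(NotNull<string>) still contains a null reference, which is exactly the kind of hole only a compiler- or verifier-level feature can close.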

  • exoteric

    Bass said:
    exoteric said:
    *snip*


    It's the same with mutable vs immutable - we're finding that the defaults are wrong

     

    - mutable vs immutable by default

    - nullable vs unnullable by default

    - imperative vs declarative by default

     

    Really, all these choices steer you toward the functional paradigm, though not necessarily away from object-orientation (Scala, and to a lesser extent F#).

     

    But it's actually not too late to reverse the situation in C# - consider this:

     

    val x = 3; // immutable by default - type-inferred; doesn't imply outlawing "var" but does begin a movement towards a saner default (well, actually, within a single method, if side-effects are constrained to it, supposedly it doesn't matter for purity; still, val would be nice; and this doesn't mean transitively immutable - each type would still need to be implemented as immutable)

     

    immutable unnullable class Person ... // self-evident, if bloated - but bloat at definition site is better than bloat at use site

     

    or -

     

    invariant class Person ... // as above

     

    or -

     

    immutable class Person! ... // as above

     

    Strip away some paths of expression (implicit nullability, implicit mutability, implicit side-effects) and you give the compiler greater freedom (or at least easier access to it) to do radical program transformation. I'd like to think that's where we're going...

     

    One thing is reversing the present. Another is reversing the (sins of the) past - also known as the .NET Framework... But that reversal probably begins with a more principled C#.
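
    For comparison, the closest current C# gets to the Person example above is spelling the immutability out by hand at the definition site - a sketch, not new syntax:

    // Immutability by convention: readonly fields, get-only properties, no mutators.
    // Nothing marks the type as immutable to the compiler, and nothing rejects nulls.
    public sealed class Person
    {
        private readonly string name;
        private readonly int age;

        public Person(string name, int age)
        {
            this.name = name;
            this.age = age;
        }

        public string Name { get { return name; } }
        public int Age { get { return age; } }
    }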

  • exoteric

    stevo_ said:
    vesuvius said:
    *snip*


    Is there no one DI/F to rule them all? If we have several DI/Fs will they compose? Is this a sustainable path?

  • stevo_

    exoteric said:
    stevo_ said:
    *snip*


    Like anything, I think it's probably a good and realistic thing that there isn't a single great one. Windsor is pretty solid, though it focuses more on the IoC aspects. In terms of raw dependency injection - i.e. I have a service A which requires a service B - MEF offers more choices of resolution aspects.

     

    You could well use both MEF and something like Windsor today, but I don't think they would align exactly. I think MEF may gain more IoC-like features in upcoming releases and become something close to a 'fantastic all-rounder' for a while.

     

    These days I use Windsor, but I don't really think any IoC frameworks are really meeting modern requirements. MEF has stable composition, which I think is something IoC containers should look into providing as an option.
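
    To make the "service A requires service B" case concrete, here is a minimal MEF constructor-injection sketch (the interface and class names are invented; the attributes and container come from System.ComponentModel.Composition in .NET 4):

    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;
    using System.Reflection;

    public interface IServiceB { void Work(); }

    [Export(typeof(IServiceB))]
    public class ServiceB : IServiceB
    {
        public void Work() { }
    }

    [Export]
    public class ServiceA
    {
        private readonly IServiceB b;

        // MEF satisfies this dependency when ServiceA is composed.
        [ImportingConstructor]
        public ServiceA(IServiceB b) { this.b = b; }

        public void Run() { b.Work(); }
    }

    public static class CompositionDemo
    {
        public static void Main()
        {
            var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
            using (var container = new CompositionContainer(catalog))
            {
                var a = container.GetExportedValue<ServiceA>();
                a.Run();
            }
        }
    }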

  • Bass

    exoteric said:
    Bass said:
    *snip*


    Are there any actual compilers for functional languages that can do analytical optimizations?

  • exoteric

    Bass said:
    exoteric said:
    *snip*


    Not quite sure what you mean but this work impresses me

     

    http://www.cse.unsw.edu.au/~dons/papers/CLS07.html 

    This paper presents an automatic fusion system, stream fusion, based on equational transformations, that fuses a wider range of functions than existing short-cut fusion systems. In particular, stream fusion is able to fuse zips, left folds and functions over nested lists, including list comprehensions. A distinguishing feature of the stream fusion framework is its simplicity: by transforming list functions to expose their structure, intermediate values are eliminated by general purpose compiler optimisations.
    We have reimplemented the entire Haskell standard list library on top of our framework, providing stream fusion for Haskell lists. By allowing a wider range of functions to fuse, we see an increase in the number of occurrences of fusion in typical Haskell programs. We present benchmarks documenting time and space improvements.

  • W3bbo

    exoteric said:
    Bass said:
    *snip*


    I'd like to propose an === operator, an "is identical to" operator; unlike ==, it isn't overloadable (so it's good for ensuring reference equality when working with types that overload ==, without manually calling Object.ReferenceEquals). It'd also work on structs, where == is a value comparison and not a reference comparison by default.
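
    For reference, this is the situation the proposal targets; the Money type below is made up, and today the identity check has to be spelled out with Object.ReferenceEquals:

    using System;

    // A made-up type that overloads == to mean value equality.
    public sealed class Money
    {
        private readonly decimal amount;
        public Money(decimal amount) { this.amount = amount; }
        public decimal Amount { get { return amount; } }

        public static bool operator ==(Money x, Money y)
        {
            if (ReferenceEquals(x, y)) return true;
            if (ReferenceEquals(x, null) || ReferenceEquals(y, null)) return false;
            return x.amount == y.amount;
        }
        public static bool operator !=(Money x, Money y) { return !(x == y); }
        public override bool Equals(object obj) { return this == (obj as Money); }
        public override int GetHashCode() { return amount.GetHashCode(); }
    }

    public static class IdentityDemo
    {
        public static void Main()
        {
            var a = new Money(10m);
            var b = new Money(10m);
            Console.WriteLine(a == b);                       // True  - overloaded value equality
            Console.WriteLine(object.ReferenceEquals(a, b)); // False - what a proposed === would express
        }
    }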

  • Frank Hileman

    I would like to see features to make it easier to create data types with value semantics. This includes both classes (reference types) and structs. Currently there is a lot of unnecessary work between Equals, IEquatable, ICloneable, GetHashCode, operator==, etc., with most of the work being simple stuff that could be automated. For example, by default you probably want equality and assignment to be defined as equality and assignment of fields, so there should be a way to get that automatically, and to handle exceptions to that rule declaratively via attributes.
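
    As a small illustration of how mechanical most of this is, here is the hand-written equality for a two-field value type (a sketch; every member below must be kept in sync whenever a field is added):

    using System;

    public struct Point2 : IEquatable<Point2>
    {
        public readonly int X;
        public readonly int Y;

        public Point2(int x, int y) { X = x; Y = y; }

        // All of the following is derivable mechanically from the fields.
        public bool Equals(Point2 other) { return X == other.X && Y == other.Y; }
        public override bool Equals(object obj) { return obj is Point2 && Equals((Point2)obj); }
        public override int GetHashCode() { return X ^ (Y << 16); }
        public static bool operator ==(Point2 a, Point2 b) { return a.Equals(b); }
        public static bool operator !=(Point2 a, Point2 b) { return !a.Equals(b); }
    }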

     

    Switching from a class to a struct, or vice versa, is painful, for no reason.

     

    I should not have to choose between class or struct in order to be able to choose between heap or stack allocation. Heap or stack allocation should be a decision of the user of a data type, not the implementation of the data type. That is more extreme, and probably not possible without breaking C#, because we would need syntax to indicate what is a reference and what is a value, as in C++.

     

    We need the notion of const-ness and const-correctness that can carry through, so we know what is immutable at a deep level. Basically, we need a more mathematically oriented language.

     

    I would like to see a JIT compiler that can perform as well as Java's hot-spot compiler on high performance computing applications.

  • Bass

    exoteric said:
    Bass said:
    *snip*


    Vala is a language inspired by C# which does not allow null reference types without explicitly identifying them as nullable.

     

    eg:

     

    string foo()
    {
       return null;
    }
    
       

     

    won't compile

     

    string? foo()
    {
       return null;
    }
    
       

     

    would.

     

    This alerts the caller to the possibility that the function can return null, so the programmer can take the needed precautions.

     

    C# could use similar syntax. Hopefully one day we can get rid of NullReferenceExceptions for good. They do not belong in a managed language IMO.
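
    For comparison, C# already has this marker for value types (Nullable<T>), while for reference types the caller can only defend by convention - a sketch with invented method names:

    using System;
    using System.Collections.Generic;

    public static class NullDemo
    {
        // int? advertises "may be absent" in the signature, and the compiler forces an unwrap.
        static int? TryParseAge(string text)
        {
            int age;
            return int.TryParse(text, out age) ? (int?)age : null;
        }

        // A plain string return type says nothing; returning null is a convention the caller must know about.
        static string Lookup(IDictionary<string, string> map, string key)
        {
            string value;
            return map.TryGetValue(key, out value) ? value : null;
        }

        public static void Main()
        {
            int? age = TryParseAge("42");
            if (age.HasValue)
                Console.WriteLine(age.Value);

            var map = new Dictionary<string, string>();
            string found = Lookup(map, "missing");
            if (found != null) // the precaution a string? annotation would make explicit
                Console.WriteLine(found.Length);
        }
    }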

  • exoteric

    Frank Hileman said:

    *snip*

    Actually the language D has very sophisticated (transitive) const semantics.

     

    In many cases I think defining const-ness or immutability is a definition-site problem. If a type is inherently mutable, I'm not so sure it makes sense to force a new immutable version of it. Types should be designed to work as immutable or mutable, and the type system should maybe help you track whether you let an impure object flow through a method call (e.g. by forcing you to mark the return type as Impure<T>).

     

    For example, suppose you let an impure object into an otherwise pure method and then compute some result based on it, but accessing properties of the object changes the object itself (let's just say it's an evil object). The method is still pure in a way, but if you then call the same method again with the same object, it will break, because technically it's not the same object anymore. So the method is no longer reflexively pure.
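
    A purely illustrative sketch of that Impure<T> idea - the wrapper is hypothetical, and in today's C# it only documents the flow, nothing enforces it:

    // Hypothetical marker: a value that was produced with the help of side effects.
    public struct Impure<T>
    {
        private readonly T value;
        public Impure(T value) { this.value = value; }
        public T Value { get { return value; } }
    }

    public interface IClock { System.DateTime Now { get; } }

    public static class PurityDemo
    {
        // The return type advertises that the result depends on something impure.
        public static Impure<int> CurrentHour(IClock clock)
        {
            return new Impure<int>(clock.Now.Hour);
        }
    }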

  • exoteric

    More about tuples. I see in .NET 4.0 Beta 2 they are implemented as classes. How about this?

     

    public struct STuple<a>
    {
        private readonly a x;
        public a Item
        {
            get { return x; }
        }
        public STuple(a a)
        {
            x = a;
        }
    }
    public struct STuple<a, b>
    {
        private readonly a x;
        private readonly STuple<b> s;
        public a Item
        {
            get { return x; }
        }
        public STuple<b> Rest
        {
            get { return s; }
        }
        public STuple(a a, b b)
        {
            x = a;
            s = new STuple<b>(b);
        }
    }
    public struct STuple<a, b, c>
    {
        private readonly a x;
        private readonly STuple<b, c> s;
        public a Item
        {
            get { return x; }
        }
        public STuple<b, c> Rest
        {
            get { return s; }
        }
        public STuple(a a, b b, c c)
        {
            x = a;
            s = new STuple<b, c>(b, c);
        }
    }
    public struct STuple<a, b, c, d>
    {
        private readonly a x;
        private readonly STuple<b, c, d> s;
        public a Item
        {
            get { return x; }
        }
        public STuple<b, c, d> Rest
        {
            get { return s; }
        }
        public STuple(a a, b b, c c, d d)
        {
            x = a;
            s = new STuple<b, c, d>(b, c, d);
        }
    }
    public struct STuple<a, b, c, d, e>
    {
        private readonly a x;
        private readonly STuple<b, c, d, e> s;
        public a Item
        {
            get { return x; }
        }
        public STuple<b, c, d, e> Rest
        {
            get { return s; }
        }
        public STuple(a a, b b, c c, d d, e e)
        {
            x = a;
            s = new STuple<b, c, d, e>(b, c, d, e);
        }
    }

     

    As you can see they are recursively constructed structs. Their definition is much like linked lists, only here defined statically as heterogeneous nested structs.

     

    It looks quite elegant, and I'd expect it to be more efficient than a class-based implementation for what tuples will often be used for. There is no interface compatibility because C# forbids it. Also, in this form the code is not FxCop-compliant (lower-case type parameters and sins like that).

     

    Now, how to make these semantics better? It would be cool if the following were possible:

    • an (s)tuple of a higher dimension extends an (s)tuple of a lower dimension (structural matching between lower- and higher-dimensional (s)tuples)
    • an (s)tuple is an IEnumerable<dynamic> when heterogeneous
    • an (s)tuple is an IEnumerable<T> when homogeneous

    These ideas would require an extension of C#'s generic type system. For example, in C# there is no struct inheritance or struct interface inheritance.

     

    Thoughts?

     

    [2nd revision]

  • Ion Todirel

    exoteric said:

    *snip*

    how is a.Rest.Rest.Rest.Rest.Item more readable than a.Item4?

  • exoteric

    Ion Todirel said:
    exoteric said:
    *snip*


    That's just a way to structure the definition in a "compositional" way. One should not normally access the items in this way (given syntax support for tuples, as mentioned earlier by wkempf). Although if you want to, it'd be easy to create getters that do this traversal. You could define properties such as Length and indexers such as Item ([]).

  • exoteric

    Ion Todirel said:
    exoteric said:
    *snip*


    To illustrate

    public struct STuple<a>
    {
        private readonly a x;
        public a Some
        {
            get { return x; }
        }
        public a this[int index]
        {
            get
            {
                if (index != 0)
                    throw new IndexOutOfRangeException();
                else
                    return x;
            }
        }
        public IEnumerator<a> GetEnumerator()
        {
            yield return Some;
        }
        public static int Index
        {
            get { return 0; }
        }
        public static int Length
        {
            get { return Index + 1; }
        }
        public STuple(a a)
        {
            x = a;
        }
    }
    public struct STuple<a, b>
    {
        private readonly a x;
        private readonly STuple<b> s;
        public a Some
        {
            get { return x; }
        }
        public STuple<b> Rest
        {
            get { return s; }
        }
        public dynamic this[int index]
        {
            get
            {
                if (index > Index || index < 0)
                    throw new IndexOutOfRangeException();
                else
                // index 0 is the first element, matching construction and enumeration order
                return index == 0
                    ? Some as dynamic
                    : Rest[index - 1] as dynamic;
            }
        }
        public IEnumerator<dynamic> GetEnumerator()
        {
            yield return Some;
            yield return Rest.Some;
        }
        public static int Index
        {
            // C# forbids: (Rest.Index + 1)
            get { return 1; }
        }
        public static int Length
        {
            get { return Index + 1; }
        }
        public STuple(a a, b b)
        {
            x = a;
            s = new STuple<b>(b);
        }
    }

    Example

    var a = new STuple<int>(3);
    var b = new STuple<int,int>(1,2);
    foreach (var x in a)
        Console.WriteLine(x);
    foreach (object x in b)
        Console.WriteLine(x);
    Console.ReadKey();
    

    If we could somehow express a version of GetEnumerator that only applies to a homogeneous (s)tuple, meaning an (s)tuple where all type parameters are equal, then there would be no need for a dynamic formulation, especially as this is statically known.

    public static class STupleExtensions
    {
        public static IEnumerator<a> GetEnumerator<a>(this STuple<a, a> s)
        {
            yield return s.Some;
            yield return s.Rest.Some;
        }
    }
    

    I realize though that this kind of trickery is probably not very high on any wishlist heh

  • stevo_

    Is it even ON the wishlist? Can you explain why somebody would want this vs. the .NET Tuple class? Regardless of source naming guidelines Tongue Out (having type arg a and parameter a is really needlessly complicated), personally I think you have an over-obsession with functional-style code.

     

    C# isn't classically functional, so trying to treat it as such is really pretty abusive, and it shows in how you'd need to use it. If you want to go nuts on functional style, why not just pick a language that is built on functional foundations, like F#?

     

    Your efforts would be much more rewarding there.

  • exoteric

    stevo_ said:

    *snip*

    There are several things to consider. (The "why" is explained previously, but see the end of this post as well - the questions.)

     

    Types are shared across languages, imperative or declarative. The discussion here is about the formulation of tuples in .NET, so it is really language-agnostic in a sense; it just so happens that C# is quite close to the lingua franca of .NET, MSIL. It could have been VB too, for that matter. F# has a succinct syntax, but I'm not very convinced it's fundamentally more functional than C#, for example, given that C# now has lambdas and "monadic syntax". Both F# and C# are hybrid imperative/declarative/object-oriented languages, and F# has had to adapt to this as the .NET Framework is object-oriented (one possible interpretation of the dot in dot net?). So I don't believe resistance to functional-style C# is that meaningful in the larger scheme of things.

     

    The experimental code provided here is just to show how tuples could be defined in a more "compositional" way. There is disagreement as to whether this is a good thing. That's good, and it certainly does not discourage me from experimenting - I find this both educational and fun. C# or F#, it really doesn't make that much difference to me. Types are shared - the fundamental means of expression also.

     

    So you should refocus your attention on the semantics: are tuples as sequences helpful or harmful; are tuples as structs vs. classes helpful or harmful; is a "recursive" definition of tuples more helpful than a flat one?

  • exoteric

    exoteric said:
    stevo_ said:
    *snip*


    So let me boil it down to its absolute undeniable essence

     

    STuple pros

    - structs; for performance reasons (makes sense for tuples as most tuples are less than 10 elements)

    - recursion; for composition (tuple.Rest rather than new Tuple<B,C>(tuple.Item2, tuple.Item3); to be fair: representation bias)

    - enumeration; for composition (use tuples directly as sequences, which intuitively they are)

     

    Tuple pros

    - classes; for nominal typing (: INominal<T>)

    - flatness; for ease of comprehension and simplicity

    - non-enumeration; unmistakable distinctness, no accidental application of abstraction

     

    The tension here is somewhat like structural vs. nominal typing. Is it a duck if it looks, walks and quacks like a duck, or is it only a duck if it has "duck" painted on it?

     

    These are choices. Being aware that choices are being made for you, and what the trade-offs are, is useful. I'm sure there are good reasons for the current design; nevertheless, it is interesting to examine other paths of expression.

     

    Smiley
