Andrew Davey


Comments

  • Brian Beckman: Monads, Monoids, and Mort

    John Melville, MD wrote:
    
    In C# 3.0 I think you can build 95% of this with extension methods, like this:

    namespace DoDynamic {
        public static class DoDynamic
        {
            public static object Call(this object s, string name,
                params object[] parameters)
            {
                // Find and invoke the method by name using reflection.
                return s.GetType().GetMethod(name).Invoke(s, parameters);
            }
        }
    }

    Your sample becomes:
    void Test()
    {
      object foo = GetData();
      foo.Call("DoSomething");
    }

    In addition to being doable with the next C# as it is currently specified, this syntax also makes the dynamic nature of each
    call plainly obvious, and it is possible to choose dynamic or static on a per-call, rather than a per-object, basis.

    Not exactly what you asked for, but pretty close and already working (in beta builds).

    That's a good compromise for now, I guess. I could also then add "Get" and "Set" extension methods for properties...
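Those property helpers could follow the same reflection pattern as Call. A minimal sketch (the class name and the absence of error handling here are my own assumptions, not anything from the actual post):

```csharp
using System;

public static class LateBindingExtensions
{
    // Read a property value by name at runtime via reflection.
    public static object Get(this object s, string name)
    {
        return s.GetType().GetProperty(name).GetValue(s, null);
    }

    // Write a property value by name at runtime via reflection.
    public static void Set(this object s, string name, object value)
    {
        s.GetType().GetProperty(name).SetValue(s, value, null);
    }
}
```

Then `foo.Set("Title", "Hello")` and `foo.Get("Title")` work on any object exposing a matching property.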
  • Brian Beckman: Monads, Monoids, and Mort

    androidi wrote:
    
    Andrew Davey wrote: 
    So in C# I'd love to see:
    void Test()
    {
      late foo = GetData(); // "late" is a new pseudo-type keyword.
      foo.DoSomething();
    }


    How is this different from how LINQ uses C# 3.x, for example? It's not RTM yet, which is too bad, yeah.


    Er... LINQ stuff is statically typed. Here I'm talking about late binding (i.e. resolved at runtime).
  • Brian Beckman: Monads, Monoids, and Mort

    Do people see VB's inherent verbosity getting in the way of implementing features, like lambda expressions, in a way that won't confuse Mort?

    In C# 3.0:
      list.Where(x => x == 42)
    is nice and succinct.

    I've not been able to find a VB 9 version of that yet. The VB futures website still uses "AddressOf" pointing to a separate function.
    Does anyone know what the anonymous method/closure/lambda expression syntax looks like in VB 9?

  • Brian Beckman: Monads, Monoids, and Mort

    Adding late binding to C# could be done easily if they copied Boo. In Boo you can declare a variable "as duck". Then any references to members on that variable are late-bound. I like the approach because it makes the declaration explicit.

    def Foo():
        x as duck = GetSomething()
        print x.Bar()

    Note that it's duck as in "duck typing".

    Where it gets more awesome is if you implement the IQuackFu interface on a class. This interface defines three methods: QuackInvoke, QuackGet and QuackSet. When calling code invokes a member that your class does not statically define, the compiler makes it call the dispatcher methods of the interface, passing the member name and arguments.
    This basically means you can do funky stuff like add methods at runtime to a class. See http://docs.codehaus.org/pages/viewpage.action?pageId=13653 for a cool dynamic mixin example.
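Rendered in C# syntax, an IQuackFu-style contract and a minimal dictionary-backed implementation might look something like this (a sketch only: C# has no such compiler support, and the member names simply mirror Boo's):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical C# rendering of Boo's IQuackFu dispatcher interface.
public interface IQuackFu
{
    object QuackInvoke(string name, params object[] args); // method calls
    object QuackGet(string name);                          // member reads
    void QuackSet(string name, object value);              // member writes
}

// A bag whose members are defined entirely at runtime.
public class ExpandoBag : IQuackFu
{
    private readonly Dictionary<string, object> members =
        new Dictionary<string, object>();

    public object QuackInvoke(string name, params object[] args)
    {
        // Look up a stored delegate by name and invoke it dynamically.
        var method = (Delegate)members[name];
        return method.DynamicInvoke(args);
    }

    public object QuackGet(string name) { return members[name]; }

    public void QuackSet(string name, object value) { members[name] = value; }
}
```

With compiler support in the Boo style, `bag.Add(2, 3)` on an unknown member would be rewritten to `bag.QuackInvoke("Add", 2, 3)`; without it, you call the dispatcher methods directly.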

    Adding this feature to C# would not in any way affect normal early-bound code. People are free to ignore the feature, but it's there if they really need it (e.g. with COM interop).
    Adding support for something like IQuackFu (changing to a less silly name too I bet!) would actually surpass VB's dynamic abilities.

    So in C# I'd love to see:
    void Test()
    {
      late foo = GetData(); // "late" is a new pseudo-type keyword.
      foo.DoSomething();
    }

  • Windows Marketplace: Write a Windows app. We'll sell it for you.

    I don't think they said that it is a requirement... Just that if you do have it, then it's flagged up. The idea being a user can tell if the app has been logo certified.
  • Programming in the Age of Concurrency: The Accelerator Project

    Minh wrote:
    I'm curious - why not implement this as a library on top of multi-core CPUs (which seems a much more useful scenario) rather than a GPU?

    (or perhaps you find the limited PS instruction set easier to start out with)

    Parallel data and parallel instructions are two different beasts, I guess. Trying to operate on a single dataset from multiple processors causes all kinds of memory/cache issues. When you can split the data up and work independently, it's fine. However, when you can't, the only performant way to operate is on one processor, in this case taking advantage of the data parallelism inside a single GPU.
    Of course, I'm not an expert by any means in this area... hopefully the boffins at MSR are finding clever solutions to these tricky problems.
  • Programming in the Age of Concurrency: The Accelerator Project

    They are using Managed DirectX. So that takes care of talking to the video card for them.
  • Programming in the Age of Concurrency: The Accelerator Project

    What happens for those lucky people with dual video cards? Can Accelerator use both in parallel? Big Smile

    (No I don't have dual cards, I just like the idea!)
  • Programming in the Age of Concurrency: The Accelerator Project

    I wonder if I can justify a shiny new graphics card under the guise of "research" Wink
  • Programming in the Age of Concurrency: The Accelerator Project

    Data parallelism for big numerical problems is kind of obvious. I think the next challenge is bringing parallelism to regular business apps. For example, if I have a list of business objects and want to validate them all, or maybe check for changes against a web service, doing a simple "foreach" loop is dumb when I have 2 or more CPUs. Maybe one day we will have languages and compilers smart enough to just express "validate all these objects" and have them work out the most efficient way to do it...
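As a rough sketch of where that could go, using `Parallel.ForEach` from the Task Parallel Library and a hypothetical `Validate` method (both assumptions on my part, nothing to do with the Accelerator SDK itself):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Order
{
    public decimal Total;
    public bool IsValid;

    // Hypothetical validation rule, for illustration only.
    public void Validate() { IsValid = Total >= 0; }
}

class Program
{
    static void Main()
    {
        var orders = new List<Order>
        {
            new Order { Total = 10m },
            new Order { Total = -1m }
        };

        // The runtime partitions the list across available CPUs;
        // the code says *what* to do, not how to schedule it.
        Parallel.ForEach(orders, o => o.Validate());

        Console.WriteLine(orders[0].IsValid); // True
        Console.WriteLine(orders[1].IsValid); // False
    }
}
```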
  • Programming in the Age of Concurrency: The Accelerator Project

    Even if the SDK has no source it's managed code, so you can get Reflector in there and have a snoop around Tongue Out
  • Programming in the Age of Concurrency: The Accelerator Project

    Deferred evaluation is definitely a very interesting subject. The work being done with LINQ is along the same lines. The compiler generates an expression tree that can then be passed around as data and transformed before evaluation.
    I wonder if it's possible to take expression trees generated by LINQ and transform them into parallelisable computations. I suppose it really comes down to "map" and "reduce" functions in the end. Whilst you are kind of limited to pure arithmetic operations in the GPU, the future of multi-cores certainly could widen the scope.
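The expression-tree half of that is already concrete: assigning a lambda to an `Expression<...>` type makes the compiler emit a data structure rather than IL, which can be inspected, rewritten, and compiled on demand. A small sketch:

```csharp
using System;
using System.Linq.Expressions;

class Program
{
    static void Main()
    {
        // The compiler reifies this lambda as an expression tree.
        Expression<Func<int, int>> doubler = x => x * 2;

        Console.WriteLine(doubler.Body); // prints "(x * 2)"

        // A transformation pass could rewrite the tree here, e.g.
        // into a parallel or GPU-targeted form, before compiling it.
        Func<int, int> f = doubler.Compile();
        Console.WriteLine(f(21)); // prints "42"
    }
}
```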

    Of course, I can't talk about abstract syntax trees without once again mentioning syntactic macros Wink It would be interesting to look at using syntactic macros to perform staged computation. I'm sure some of the parallelising of operations can be decided at compile time. That could make for even more performance increases since you can take some weight off the JIT compiler. Big Smile

    Anyway... Awesome work and great video Charles.
    BTW: Charles, you need to get a secondary job at MSR being "social glue"! We need to get all these academics down to the bar to mix their ideas.