Wizarr

Niner since 2007

  • Stephan T. Lavavej: Digging into C++ Technical Report 1 (TR1)

    evildictaitor wrote:
    

    I think you should have a look at managed C++ - we have two keywords: new works the same as in normal C++, and gcnew tells the garbage collector that it is responsible for cleaning up the object. This allows you to use the shared-pointer semantics for C++ types, and use the garbage collector for the .NET components.



    You are correct if I wanted to use the shared-pointer semantics today, but I drank the dotnet koolaid a long time ago Smiley  For the most part I was describing a theoretical managed environment where you might not even need a GC, or at least could pre-mark memory for reclamation so that a parallel GC could run without blocking all threads while it searches and reclaims memory.  The only time I use managed C++ is when I am importing native DLLs into dotnet, using it only as a wrapper DLL to managed code when it's too hard to convert to P/Invoke method signatures.
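
    A minimal C++/CLI sketch of the two keywords the quote contrasts, as in the wrapper scenario above (the class names are hypothetical; compiles under /clr):

        #include <memory>

        class NativeThing {                   // plain C++ type, e.g. from a native DLL
        public:
            ~NativeThing() { /* deterministic cleanup */ }
        };

        public ref class ManagedThing {};     // .NET type; the GC owns its lifetime

        void demo() {
            // new + shared_ptr: reference-counted native object
            std::tr1::shared_ptr<NativeThing> sp(new NativeThing());

            // gcnew: cleanup is handed to the garbage collector
            ManagedThing^ mt = gcnew ManagedThing();
        }   // sp's count hits zero here and ~NativeThing runs; mt is collected later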

  • Stephan T. Lavavej: Digging into C++ Technical Report 1 (TR1)

    evildictaitor wrote:
    
     

    Sure it's possible, but this problem is being arrived at from different perspectives. In dotNET pretty much everything lives on the heap. When you do a new object() or a new array[] construction, the item is built onto the heap, and the thing you are storing in your variable is a reference to the object. In C++ when you use a non-indirected structure (such as T value or vector<X> values) you are creating the object on the stack. This means that with C++ it is plausible for the compiler to know which objects need to be disposed as you leave scope, whereas in dotNET leaving scope is independent of the objects on the heap.

    This being said, deterministic garbage collection is possible for C#, but it is in general more expensive in terms of CPU cycles. Note that you can bypass the GC entirely by using unsafe code.



    I am very clear about everything living on the heap in a managed world.  What I am talking about is shifting the concept of a managed environment to include these shared pointers in a way that still has access to the heap, maybe a separate heap or the same one with ownership properties, so that regular variables not belonging to this shared-pointer structure can't touch that memory and the runtime can keep it isolated.

    The advantage is that memory you can isolate is memory the GC doesn't have to collect, which means faster performance at the cost of the internal structure overhead of tracking who owns what.  And I think there is plenty of evidence out there that deterministic destruction always wins on performance.  Sure, the GC has really shined over the years, but if you could state more declaratively to the .net environment that a piece of memory is guaranteed to be destructed, there would be no need to use unsafe code.  This is a way to still code safely (both type-safe and memory-cleanup-safe, if you will).

    This idea mostly stems from the other Channel 9 videos on language development, where they talk about how the dotnet framework could have been even better with dynamic language support built in rather than a DLR library layered on top.  The idea is to change the dotnet framework itself to incorporate new declarative-programming techniques.  Even with dotnet 3.5 we are still basically running on the dotnet 2.0 runtime.  This would be a heavy internal structural change while keeping the current GC intact for backwards compatibility.

    I noticed that these shared pointers do a heavy level of management on the pointer's behalf, so why can't we integrate the same level of management into dotnet?  Garbage collection was the comparatively easy method at the time versus integrating shared-pointer concepts.  Maybe add a new keyword called "spnew" that replaces or supplements "new" and automatically handles where the object gets allocated.  The type can then be inferred from the spnew operation so that it bypasses the GC, with attributes telling the GC the scope for deletion.  I haven't really looked too hard at how best to describe this feature, but I think the two can coexist.
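
    For illustration, the bookkeeping spnew would have to mirror is roughly what TR1's shared_ptr already does in native code; spnew itself is hypothetical, so here is the plain C++ equivalent:

        #include <memory>
        #include <cstdio>

        struct Widget {
            ~Widget() { std::puts("freed the instant the count hits zero"); }
        };

        void refcount_demo() {
            std::tr1::shared_ptr<Widget> a(new Widget());   // count == 1
            {
                std::tr1::shared_ptr<Widget> b = a;         // copy: count == 2
            }                                               // b leaves scope: count == 1
        }   // a leaves scope: count == 0, ~Widget runs here, no collector involved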

    Basically it comes down to this: if you don't care, just use the normal new and big brother (the GC) will come clean up your mess; but if you want more control over performance, you show the compiler through some declarative programming that your code is safe, it reference-counts for you internally, and you get a speed boost.

    Another point to look at is composability.  Forget current programming languages for a moment.  Is it not more honest if you can tell the compiler declaratively, and semantically, where all the points of destruction are?  From that point of view, C++ with shared pointers is actually being more honest about destruction.  I think dotnet needs to do the same thing.

    If we can make dotnet more honest, I think we will see more acceptance of languages like Haskell and other functional languages, and at the same time be able to build a hybrid that handles Haskell's hard problems, such as how state gets passed around and handled.

    So to conclude, these shared pointers are at the very beginning of what they are capable of.  They are most easily added to an unmanaged language because it lacks the type-safety integration that dotnet has to provide.  But I think the potential of shared pointers is just reaching the beginning of what is possible.

    As for clarifying my earlier comment about compiler optimization support: I was just thinking out loud that if the compiler were smart enough, we wouldn't need to declaratively state that we want something reference-counted; it could infer it automatically whenever certain isolation conditions are met.  Like all other optimizations, we never tell the compiler to change our code, but it does so when it can guarantee the same output, faster.

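    To make the scope point from the quote concrete, here is a minimal C++ sketch (the names are just for illustration): when an object is non-indirected, its destruction point is statically known, which is exactly the kind of isolation a compiler could infer.

        #include <vector>

        struct Resource {
            ~Resource() { /* runs at a statically known point */ }
        };

        void scope_demo() {
            Resource r;                    // non-indirected: lives on the stack
            std::vector<int> values(10);   // ditto; its destructor frees its buffer
            Resource* p = new Resource();  // indirected: the compiler no longer knows
                                           // when *p should die; someone must delete it
            delete p;
        }   // r and values are destroyed right here, deterministically
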
  • Stephan T. Lavavej: Digging into C++ Technical Report 1 (TR1)

    Not meaning to downplay how cool these additions to C++ sound, but as a C# developer, it raises a serious question.  Is it possible to add a more declarative syntax to C# that enables a special mode in the managed .net environment where you can bypass garbage collection?  If you can prove to the compiler that you will have deterministic destruction via these "shared pointers", enforced by the runtime and reference counted, I think that would lessen the garbage-collection burden for those individual resources, speed up your code, and be more memory efficient.
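
    For what it's worth, C++/CLI already gets partway there with stack semantics for ref classes: the destructor is emitted as IDisposable.Dispose and runs at scope exit.  A minimal sketch (the class is hypothetical):

        public ref class Connection {
        public:
            ~Connection() { /* emitted as Dispose(); runs deterministically */ }
        };

        void use() {
            Connection c;      // stack semantics for an object on the GC heap
            // ... use c ...
        }   // Dispose runs right here; only the memory itself waits for the GC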

    I really like the portability of MSIL, being able to run managed code on Mono, with the JIT compiling to either 32-bit or 64-bit on the fly.  However, one of my biggest annoyances with the managed environment is that there is currently no way to prove to the compiler or the runtime, with verifiable proof, that basically says: hey, look at this code, you can see that I am following the rules of this abstraction, so let me set up this optional declarative feature that allows deterministic destruction of these resources.  My code would not only be more composable, it would still be bound by .net type safety.  At the same time, this would also remove the need for parallel garbage collection, a problem which, by the way, may never be solved without more declarative honesty built into the language.

    You have to get this guy talking with the guys from the Haskell camp, with Erik Meijer and Gilad Bracha, and maybe even Anders Hejlsberg as a mediator, discussing the future of dotnet and how to add declarative directives to the language and/or the foundation, and debating whether it can improve both performance and composability.  I think it would also be a way to increase the honesty factor of dotnet.  I really want to see how they hash this all out.

    As a side note, I think it's an interesting idea to create a compiler that detects certain code patterns and converts them automatically to a more efficient managed reference-counting model while keeping all the benefits of a managed language.  It could be an option on your compiler's optimizer.