Comments

pierreleclercq
  • Microsoft Platform Vision in the Post Bill Era: Meet Craig Mundie

    A question for Chaz:

    Is it possible to upload images for the posts?
  • Microsoft Platform Vision in the Post Bill Era: Meet Craig Mundie


    Speaking about cathedrals:

    [image]

    Isn't it cute? Big Smile

    My opinion is I "massively" love those beauties.
    It takes an average of three centuries for this type of work,
    so the people who started it did not get a chance
    to see it done. For sure, no one could have claimed
    ownership of the job...


    I wonder how a Gantt chart would have looked
    for this project, had it been invented Smiley




  • Microsoft Platform Vision in the Post Bill Era: Meet Craig Mundie


    Speaking of formal composition, it looks like this idea has been used more
    and more over the past few years. One can see this by taking a look at how
    the various SDKs have been built on top of each other. From a conceptual point
    of view this is simple and easy to deal with, but unlike mechanical engineering,
    where it is perfectly OK to build a simple layered model of a bridge and then
    build the bridge, a piece of software might run into performance issues with
    this type of approach. In the early 90's Windows NT laid out a great and
    clean layered architecture, but the following versions had to break and merge
    some parts of the layers for performance reasons. An example of this is how the
    video drivers were re-architected. So a simple and formal composition model
    will certainly be more and more in use, but it will certainly have to be broken
    from time to time to avoid becoming too heavy.
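
    To make that trade-off concrete, here is a minimal C++ sketch (all the type
    and function names are made up for illustration, not any real Windows API):
    a strictly layered drawing path, plus an escape hatch that lets a hot loop
    bypass the middle layer, roughly in the spirit of the video-driver rework.

        #include <cstdint>
        #include <vector>

        // Lowest layer: a hypothetical frame buffer.
        struct FrameBuffer {
            std::vector<std::uint32_t> pixels;
            int width, height;
            FrameBuffer(int w, int h) : pixels(w * h), width(w), height(h) {}
            void Set(int x, int y, std::uint32_t color) { pixels[y * width + x] = color; }
        };

        // Middle layer, formally composed on top of FrameBuffer: every pixel
        // pays an extra call and a bounds check.
        class Canvas {
        public:
            explicit Canvas(FrameBuffer& fb) : fb_(fb) {}
            void DrawPixel(int x, int y, std::uint32_t color) {
                if (x < 0 || y < 0 || x >= fb_.width || y >= fb_.height) return;
                fb_.Set(x, y, color);
            }
            // The "broken" layering: expose the layer below so performance-critical
            // code can write pixels directly instead of paying the per-pixel overhead.
            FrameBuffer& RawBuffer() { return fb_; }
        private:
            FrameBuffer& fb_;
        };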

    Speaking of both concurrency and formal composition, one could remember
    the ACTOR model, and how neat, clean, and simple it was. Everything is an
    actor and you build upon this. But for real applications this model
    was a little hard to use without a few changes. Even LISP, the language where
    assignment was not supposed to be used, eventually introduced the setq syntax.

    The DNA architecture introduced a very sexy way of looking at software systems.
    Let's get some inspiration from biological systems. When a stimulus comes in,
    it is normally processed through the central nervous system. This central system,
    built using formal composition, encompasses layers and layers of data processing.
    This takes a while, and by the time the response is ready, the biological
    system has to hope the environment has not changed too much. Some cold-blooded
    dinosaurs could have a small beast feeding on their tail, as it would take about
    a minute for the nerve impulse to propagate to the central brain. So some of
    those dinosaurs developed an intermediate brain at the bottom of their back that
    would preprocess incoming data and, for example, order a 'shake tail' when a
    bite was felt.
    Our brain also has similar mechanisms: when some unrecognized pattern comes in,
    it is processed by the higher-level, slower (and heavily layered) parts of the
    brain. If an answer is found, and this pattern shows up several times, a
    stimulus/reflex pair is stored in the reptilian brain.

    I am a big fan of the dotnet framework, and started using it as soon as it became
    commercially available. But at the beginning, a few things looked strange, until
    I figured out the whole framework was another layer on top of a number of existing ones.

    The feeling I have today is that, after years of a regular C-style procedural culture,
    some kind of internal revolution occurred at MS, driven by academic formal approaches.
    I would be the first to salute this switch, and I understand why it is happening, but
    I'd remain cautious about which steps may have been skipped, and would not forget
    MS is building real-world applications, not academic try-outs.


     

  • Carnegie Mellon Robotics Lab


    Organizational research: the feedback loop model. (CS-CE oriented)

    Let's acknowledge that a pipeline model, although useful and simple, does not necessarily provide the most effective way to find innovative ideas and turn them into products.

    Great products are often built on an initial good idea, but require constant seeding to keep evolving. A lot of input comes from marketing feedback, and it is necessary. On the other hand, product development itself generates a lot of side technology which usually goes unexploited due to time/budget constraints, or simple irrelevance to the priorities at hand.

    Research is not necessarily product driven, and the goals are more about developing concepts and building proofs of concept.

    The classical pipeline model usually relies on hypothetical communication between those two layers, each with distinct goals. The idea would be to come up with an organization that would provide a tighter integration, in order to not only feed research into development, but also feed back some development ideas into research.

    The process would be split into three different teams. The first team would look like a conventional research team, in charge of exploring ideas and building proofs of concept. The second team would look like a product team, but on a lightweight scale. It would monitor the work of the research team and identify meaningful ideas to productize. As a lightweight development team, it would follow a development process where the goal would not be to actually kick something out the door, but rather to provide development-oriented insights to be fed back into the research team. Just as in a neural net, you would then witness an exploration process where convergence is driven by a feedback loop. The third team would have the goal of monitoring the two other teams, possibly providing some higher-level feedback, but most importantly identifying short-term products and feeding them into the marketing and engineering teams. Another goal for this team could be to facilitate technology transfers between this organization and engineering.

    As a whole, those three teams would remain a research organization, but could provide a strong interleaving of engineering and research skills and knowledge.

    One could question the amount of resources needed to fund such an organization, which is, in itself, an organizational research project, but the pipeline model, although simple, has shown its limits. It would be easy to name large companies which, despite having funded massive research efforts and actually having found great concepts, never turned them into real-life products. And the amount of resources used there is quite large, even for a pipeline model.

    Pierre Leclercq

  • Louis Lafreniere - VC++ backend compiler

    Oops, sorry for the double message. Can it be edited out?
  • Louis Lafreniere - VC++ backend compiler

    louisl wrote:


    As far as runtime detection of the architecture we run on, the CRT does look at it and take advantages of the SSE/SSE2 instruction when available to speed up some computations, and to move larger chunks of memory at a time.  The generated code from the compiler doesn't do this however.  Doing so would cause a lot of code duplication and our experience has showed that code size is very important for medium to large apps.

    -- Louis Lafreniere


    How interesting. One might think the JIT should be able to take
    advantage of runtime detection of the hardware to generate code
    specific to the current processor. Still, as Brandon Bray was pointing
    out, the JIT has stricter time constraints than a regular
    compiler, and therefore cannot spend too much time optimizing.
    One could also wonder how this would impact performance in
    general, as most of the time the difference should be small. (?)

    Are these considerations part of the Phoenix project?
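
    To make the runtime-detection idea concrete, here is a minimal C++ sketch
    of the kind of dispatch the CRT is described as doing (HasSSE2 and FastCopy
    are made-up names for illustration, not the actual CRT routines):

        #include <intrin.h>      // __cpuid (MSVC)
        #include <emmintrin.h>   // SSE2 intrinsics
        #include <cstring>
        #include <cstddef>

        // Does the CPU we are running on support SSE2? (CPUID leaf 1, EDX bit 26)
        static bool HasSSE2()
        {
            int info[4] = { 0 };
            __cpuid(info, 1);
            return (info[3] & (1 << 26)) != 0;
        }

        // Detect the hardware once, then move 16 bytes at a time when SSE2 is
        // available, falling back to a plain byte copy otherwise.
        void* FastCopy(void* dst, const void* src, std::size_t n)
        {
            static const bool sse2 = HasSSE2();
            unsigned char* d = static_cast<unsigned char*>(dst);
            const unsigned char* s = static_cast<const unsigned char*>(src);
            if (sse2)
            {
                while (n >= 16)
                {
                    __m128i chunk = _mm_loadu_si128(reinterpret_cast<const __m128i*>(s));
                    _mm_storeu_si128(reinterpret_cast<__m128i*>(d), chunk);
                    d += 16; s += 16; n -= 16;
                }
            }
            std::memcpy(d, s, n);   // copy the tail (or everything, without SSE2)
            return dst;
        }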

  • Life and Times of Anders Hejlsberg


    Many programmers certainly have fond memories of the yellow
    and blue IDE, where drop-down menus were made of characters.

    I am curious about how the C# team will face the challenge of
    a growing language. Someone in the audience mentioned
    functional programming creeping into C#. For example,
    LISP, initially designed to be very simple and homogeneous,
    has since evolved into a 1200-page standard. And this standard,
    besides being "functional-oriented", had imperative constructs
    and object-oriented constructs. C++, starting from its low-level
    origins, has also grown into a thousand-page standard.
    The C# team actually did a great job of designing a homogeneous
    language, but the idea of providing one path for one problem
    will have a hard time living through the growth of the language.
    As eventually there always comes a situation where the programmer
    needs freedom, the choice might be between spawning new simple
    languages, or keeping on growing. So far, the number of languages
    available just from MS gives a clue to the extent of the problem.
    (And this is not a negative comment.)
    Back in the early 80's the DOD realized they had tens of programming
    languages in use internally, so they decided to come up
    with a unification, a.k.a. Ada. But that did not remove the need for
    various types of languages. In the 60's, AI was supposed to be
    achieved before the end of the century. And declarative programming
    has been a promise since the 40's.
    I think the advances with XML are really great, but as someone wrote in
    another post, it might be a good idea to have a declarative layer
    and an imperative/object-oriented layer.
    So hurry slowly toward fully declarative languages.

     

    Also, there is much greatness in .net. Although C# and .net are tightly
    coupled, programming for .net provides a very homogeneous set of
    programming experiences across all the supported languages. This
    factorization favors improvements across a variety of languages,
    each with its own flavor, but iteratively improving each other.
    It looks like powerful leverage for evolution...
    (And it still provides lots of freedom for specific classes of problems.)

    This ecosystem-oriented growth of the languages is IMHO something
    where Java definitely fell short... Well, I mean the initial idea,
    as there is a J#.net Smiley

    As Anders said: Keep inventing...

  • Louis Lafreniere - VC++ backend compiler


    Great interview!

    Concerning the IA-64 architecture, there was a mention that
    the compiler has to do more of the smart work to optimize code layout.
    So what is the reasoning for this change? Is it about
    making the architecture simpler? (Assuming it's more complex
    in other respects.)

    I also very much appreciate the improvements in back-end code generation
    for VC++. It is nice to see a video like this, as there are good
    surprises in code generation that one could only discover by
    stepping through the disassembly window.

    Additions to the language, or new libraries, change the way we
    write code, but discovering new optimizations really gives a
    different perspective. For example, the removal of the copy of an
    object being returned from a function allows writing code
    that makes much more use of automatic variables (and therefore
    relieves a lot of the pointer management).
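
    As a small illustration of that copy removal (a minimal sketch; Vec3 and
    MakeUnitX are made-up names), returning a local object by value lets the
    compiler construct it directly in the caller's storage, so code can lean
    on automatic variables instead of heap pointers:

        #include <cstdio>

        struct Vec3 { double x, y, z; };

        // The named local can be built directly in the caller's return slot
        // (named return value optimization), so no copy is paid here.
        Vec3 MakeUnitX()
        {
            Vec3 v = { 1.0, 0.0, 0.0 };
            return v;
        }

        int main()
        {
            // Automatic variable: no 'new'/'delete', no pointer bookkeeping.
            Vec3 v = MakeUnitX();
            std::printf("%f %f %f\n", v.x, v.y, v.z);
            return 0;
        }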

    I guess that someone who was writing C++ code 10 or 15 years
    ago, and is still doing so now, would certainly have the feeling he/she
    is using a different language, even though it's still C++.

    By the way, as more developers get familiar with the C# coding style,
    it may be that more and more C++ classes will be written entirely in a
    header, rather than the usual .h/.cpp pair. If a Visual Studio person
    reads this, it would be nice to factor this into the smart indent.
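
    For what it's worth, a minimal sketch of that header-only style (Counter.h
    is a made-up example, not an existing file):

        // Counter.h -- the whole class lives in the header, C#-style,
        // instead of being split across a .h declaration and a .cpp definition.
        #pragma once

        class Counter
        {
        public:
            Counter() : count_(0) {}
            void Increment() { ++count_; }
            int  Value() const { return count_; }
        private:
            int count_;
        };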