Comments

billh
  • Louis Lafreniere - VC++ backend compiler

    Again, great video. More! You should interview some assembly language people...I would like to hear about the differences and changes over the years in the Pentium architecture and how your teams have adapted to that on very low levels. You kind of hit on that a bit with the multicore discussion here. I've thought a lot about getting back into some assembly programming just for fun (I did a fair amount of it back in the days of the 6502 chips), but am wondering how easy that will be considering the optimization that occurs on the chip itself, the caches, etc.

    Question: how do you target your compiler for different Pentium architectures? From what I remember, Intel seems to alter a few instructions with every generation (from the Pentium to the Pentium II, on up to the current ones). Does your compiler recognize the user's chip and pick the best optimization? How about for programs that are shipped? How do those recognize the user's chip? Or do you not take advantage of the latest additions made by Intel?

    Unfortunately, I do not own a copy of Visual Studio, so maybe those are options in the IDE, I don't know.

  • Windows, Part II - Dave Probert

    Great video. This brings back memories of the days when I was poring over manuals and tinkering with different sector interleaving schemes on the Apple II. Same timing/latency issues back then: waiting for the information to pass under the read/write head.

  • Jim Hogg: Phoenix Framework

    Jim, you rock! Thanks for taking time out to reply to all of my questions. Smiley

    Charles, thanks again for this video. I think you need to interview compiler people more often.

    :O

    I love studying compilers (although I still don't have a total handle on yacc, bison, etc.). I'll experiment with other types of parsers when I get some free time. It's hard to explain what I am envisioning in a parser without some pictorial explanations or a working demo.

    I'm sure I'll come up with more questions, and maybe eventually I'll get around to downloading Phoenix.

    Thanks again!

  • Jim Hogg: Phoenix Framework

    Question time!*

    1) Wouldn't it be possible to check for buffer overflows on the front end of the compiler? Maybe somewhere between the lexer/parser stages and the backend? Hopefully that is done before the optimization phase.

    2) Any thoughts about running Phoenix itself through the Phoenix compiler?

    3) How does it handle hand-optimized assembly code in the C++? I know that isn't done all that often anymore, but it does happen.

    4) In theory, you could pretty much target any processor you want (not just x86-related ones). All you'd have to do is make the compiler emit its machine code into a text file and then take that file to whatever system you want. Er, right? While you're at it, do it for the 6502.

    5) His diagram threw me off a bit. If the .NET code is run by the JIT part of Phoenix, it would not produce a machine executable, correct? I hope I phrased that right.

    6) I sensed there was some sort of "reverse engineering" ability with it. Is that a correct assessment? I thought at one point in the video he talked about taking a binary executable as input. If that is the case, does it backtrack to the point where it will crank out C++ code given a particular binary executable for input? Isn't that opening up a whole Pandora's box if people start reverse engineering everything in sight?

    7) I have not done assembly language level optimization in a long, long, long time (like the 6502 days). I have a mediocre handle on x86 assembly (and can figure it out if I'm asked to) but the way the Pentium is put together internally is sort of goofy...at least the way the registers were sort of "added on to" over the years in terms of bits. Is that the case with the multi-core processors, too? Sort of like multiplying that several times over? I know the how/why of the register design additions over the years, but I can't imagine having to write assembly for a multi-core system.

    8) I think the way parsers work is rather archaic, but that's just me. It seems incredibly inefficient to process code one character at a time (and then string them together into tokens, and then compare those tokens to predefined grammar, and then...). Any thoughts about changing that in the future? I have some ideas on how to do it, and if I find the time I might start messing around with that.

    * yes, I watched the video

  • WPF Imaging

    Thanks for this video, Charles...these are the kind of videos that make me reconsider applying to Microsoft (yes, I admit, I've done it before, so sue me :O ).  I love looking at and putzing around with file formats, and know a little about the various image formats out there.

    With regards to "anti-fuzzing", my concern would extend well beyond image files, though...are you doing this for .wav formats, media formats and other types of files?  I know with video streams the Media Player will usually balk in some way (with some type of "corrupted file" error dialog).  It is ridiculously easy to come up with your own file formats, and as an extension of that, fiddle with the ones that are out there now.

    Edit: Are there any updated graphics format pages anywhere on the internet? Here is a page of older formats if anybody is technically curious (circa 1997):

    http://www.dcs.ed.ac.uk/home/mxr/gfx/2d-hi.html
  • Virtual Earth Team - Streetlevel

    Thanks! I'll have to look at this one later when I get a chance.

    P.S. I just wanted to be the first one to post here (instead of Zeo). Big Smile

    Edit: Well, I wasn't the first...but neither was Zeo. Tongue Out
  • Seventeen Minutes With Bill

    SlackmasterK wrote:
    Bill commented about how the web still has a lot of advances to go.  For instance, shopping Amazon in a 3D style interface, walking down the aisles.  Personally, I think the biggest obstruction to this goal is:  When you've got hundreds of millions of titles, how practical is this, really?


    Did somebody say "Chrome"? Wink

    It would be practical if Amazon was willing to deploy some type of desktop application. On any given screen of Amazon, it usually only holds about ten titles (down the center of the screen).  Now, if you expand that into a 3-D book aisle, it would be a matter of piping the title information to the client app. I don't know if it would be wise to use a browser for that or not.  If you think about an aisle full of books in a typical library, for instance, all you usually see on the spines of the books are titles. Pictures only become an issue when you slide a title out and look at the cover. What would be nice is if you could walk down a virtual aisle of books and have Amazon populate the titles with a) things similar to what you have bought before, or b) random suggestions that might lead you to new topics.

    You know what? You've just given me a great idea for something I'll do here on Channel 9.  Give me a few days and I'll see if I can code something quickly to show you what I have in mind. I'm not familiar with DirectX at all, so the interface might be a little poky until I figure out something better. I was going to do another "collage" thing, but I've got a much better idea.
  • Seventeen Minutes With Bill

    If I were Mr. Gates, I'd sign up for a Channel 9 account with a completely unrelated screen name, so that I could post my comments "incognito".

    I could give him advice on how to do this.

    :O

  • Life and Times of Anders Hejlsberg

    staceyw wrote:
    "I still contend that we can do better than the current set of data structures out there.  The concept of a linked list is inherently flawed no matter where you stick the pointers. "

    How so?


    It's a long story...but I will post something about this soon.  I'm quite busy right now with other things, but that should change soon. It will take me several days to assemble a post about this topic, and to put together a prototype/demo. It will be written up in C++. 

    The very short version is this: the linked list (single or double) is somewhat primitive in its design.  It does not have to be this way.  It is rather odd that the only means of traversing between nodes is via connected pointers.  That forces a user to traverse a list, node by node, and when the list is long enough, that is time consuming.  So, I'm going to build a hybrid between an array (or vector) and a linked list.

    And then show you how to flip that into a completely different data structure in real time without moving any data around. The question is not whether it can be done, but how fast I can get it to work.

    Then, when I'm done with that, I'll put up a "data structure" builder/designer in the Sandbox. 

    Edit: Tentative "early" thoughts here (subject to great changes in the weeks ahead).
  • Life and Times of Anders Hejlsberg

    About time we saw another Anders video! Thanks. Big Smile

    Anders, you're my hero.  Even if you never read my posts.

    I still contend that we can do better than the current set of data structures out there.  The concept of a linked list is inherently flawed no matter where you stick the pointers.  Again, as I've said before, I'll post a prototype of what I have in mind within the next few weeks.

    Side note: Why do these types of videos (much like an infomercial) always have to cut to shots of the audience nodding their heads? It's so cheesy.