Again, great video. More! You should interview some assembly language people...I would like to hear about the differences and changes over the years in the Pentium architecture and how your teams have adapted to that on very low levels. You kind of hit on
that a bit with the multicore discussion here. I've thought a lot about getting back into some assembly programming just for fun (I did a fair amount of it back in the days of the 6502 chips), but am wondering how easy that will be considering the optimization
that occurs on the chip itself, the caches, etc.
Question: how do you target your compiler for different Pentium architectures? From what I remember, Intel seems to alter a few instructions with every generation (from the Pentium to the Pentium II, on up to the current ones). Does your compiler recognize
the user's chip and pick the best optimization? How about for programs that are shipped? How do those recognize the user's chip? Or do you not take advantage of the latest additions made by Intel?
Unfortunately, I do not own a copy of Visual Studio, so maybe those are options in the IDE, I don't know.
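From what I've picked up elsewhere, the usual answer for shipped programs is runtime dispatch: the binary carries several versions of the hot routines, checks the CPU's feature flags once at startup (via CPUID on x86), and installs the fastest version it finds. A rough Python sketch of just the dispatch pattern (the feature names and the two implementations are invented for illustration):

```python
# Sketch of compiler-style runtime dispatch: probe CPU features once,
# then bind the best available implementation of a routine.
# The feature set below is hypothetical; a real program reads CPUID.

def sum_generic(xs):
    # Portable fallback that works on any chip.
    return sum(xs)

def sum_sse2(xs):
    # Stand-in for an SSE2-vectorized version of the same routine.
    return sum(xs)

def dispatch(cpu_features):
    """Pick the best implementation the CPU supports."""
    if "sse2" in cpu_features:
        return sum_sse2
    return sum_generic

# An older chip gets the generic path; a newer one gets the fast path.
old_chip = dispatch(set())
new_chip = dispatch({"sse2", "mmx"})
print(old_chip.__name__, new_chip.__name__)  # prints: sum_generic sum_sse2
```

The point is that the check happens once, so the per-call cost is just an indirect call, and the same binary runs everywhere.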
Great video. This brings back memories of the days when I was poring over manuals and tinkering with different sector interleaving schemes on the Apple II. Same timing/latency issues back then: waiting for the information to pass under the read/write head.
Jim, you rock! Thanks for taking time out to reply to all of my questions.
Charles, thanks again for this video. I think you need to interview compiler people more often.
I love studying compilers (although I still don't have a total handle on yacc, bison, etc.). I'll experiment with other types of parsers when I get some free time. It's hard to explain what I am envisioning in a parser without some pictorial explanations.
I'm sure I'll come up with more questions, and maybe eventually I'll get around to downloading Phoenix.
1) Wouldn't it be possible to check for buffer overflows on the front end of the compiler? Maybe somewhere between the lexer/parser stages and the backend? Hopefully that is done before the optimization phase.
2) Any thoughts about running Phoenix itself through the Phoenix compiler?
3) How does it handle hand-optimized assembly code embedded in the C++? I know that isn't done all that often anymore, but it does happen.
4) In theory, you could pretty much target any processor you want (not just x86-related ones). All you'd have to do is make the compiler emit its machine code into a file and then take that file to whatever system you want. Er, right? While you're at it, do it for the 6502.
5) His diagram threw me off a bit. If the .NET code is run by the JIT part of Phoenix, it would not produce a machine executable, correct? I hope I phrased that right.
6) I sensed there was some sort of "reverse engineering" ability with it. Is that a correct assessment? I thought at one point in the video he talked about taking a binary executable as input. If that is the case, does it backtrack to the point where it will crank out C++ code given a particular binary executable as input? Isn't that opening up a whole Pandora's box if people start reverse engineering everything in sight?
7) I have not done assembly-level optimization in a long, long, long time (the 6502 days). I have a mediocre handle on x86 assembly (and can figure it out if I'm asked to), but the way the Pentium is put together internally is sort of goofy...at least the way the registers were "added on to" over the years in terms of width (AX to EAX, and so on). Is that the case with the multi-core processors, too, multiplied several times over? I know the how/why of the register additions over the years, but I can't imagine having to write assembly for a multi-core system.
8) I think the way parsers work is rather archaic, but that's just me. It seems incredibly inefficient to process code one character at a time (and then string the characters together into tokens, and then compare those tokens against a predefined grammar, and then...). Any thoughts about changing that in the future? I have some ideas on how to do it, and if I find the time I might start messing around with them.
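Following up on my own (1): for the statically decidable cases the answer is yes. If both the buffer size and the index are compile-time constants, a front-end pass over the AST can flag the overflow before any optimization runs. A toy sketch of such a pass (the AST shapes here are invented; a real front end also has to insert runtime checks for the dynamic cases this pass cannot see):

```python
# Toy front-end check: flag array accesses whose constant index falls
# outside the declared bounds. Runs purely on front-end information,
# so it could sit between the parser and the backend.

def check_bounds(decls, accesses):
    """decls: {array_name: declared_size};
    accesses: [(array_name, constant_index), ...].
    Returns a list of diagnostic strings."""
    errors = []
    for name, index in accesses:
        size = decls.get(name)
        if size is not None and not (0 <= index < size):
            errors.append(f"{name}[{index}] out of bounds (size {size})")
    return errors

# buf[3] is fine; buf[10] and buf[-1] should be flagged.
print(check_bounds({"buf": 10}, [("buf", 3), ("buf", 10), ("buf", -1)]))
```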
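And on (4), that's essentially how multi-target compilers are organized: instruction selection is just a table per backend. For fun, here is a tiny hand-rolled 6502 "backend" that emits real opcode bytes for a couple of instructions (LDA immediate is $A9, STA absolute is $8D, RTS is $60; the little "IR" tuple format is my own invention):

```python
# Minimal 6502 emitter: a lookup table of real opcodes, plus a helper
# that turns a few IR operations into machine-code bytes you could
# dump to a file and hand to a 6502 (or an emulator).

OPCODES = {
    "LDA_IMM": 0xA9,  # load accumulator, immediate operand
    "STA_ABS": 0x8D,  # store accumulator, absolute address
    "RTS":     0x60,  # return from subroutine
}

def emit(program):
    """program: list of (mnemonic, *operands); returns machine code bytes."""
    out = bytearray()
    for op in program:
        name, args = op[0], op[1:]
        out.append(OPCODES[name])
        if name == "LDA_IMM":
            out.append(args[0] & 0xFF)          # one-byte immediate
        elif name == "STA_ABS":
            addr = args[0]
            out += bytes([addr & 0xFF, (addr >> 8) & 0xFF])  # little-endian
    return bytes(out)

# LDA #$01 ; STA $0200 ; RTS
code = emit([("LDA_IMM", 0x01), ("STA_ABS", 0x0200), ("RTS",)])
print(code.hex())  # prints: a9018d000260
```

Swap in a different opcode table and operand encoder and you have retargeted the "compiler", which is roughly what a real backend port amounts to, plus a few thousand details.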
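On (8), one half-step away from character-at-a-time lexing is a pattern-table lexer that grabs whole tokens in one match. A small sketch (the token set here is made up):

```python
import re

# Sketch of a pattern-table lexer: each rule matches an entire token at
# once instead of assembling it character by character.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),      # whitespace, discarded below
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(text):
    tokens = []
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("total = count + 42"))
```

Under the hood the regex engine is still looking at characters, of course, but the grammar writer works a token at a time, which is most of what I'm after.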
Thanks for this video, Charles...these are the kind of videos that make me reconsider applying to Microsoft (yes, I admit, I've done it before, so sue me :O ). I love looking at and putzing around with file formats, and know a little about the various
image formats out there.
With regard to "anti-fuzzing", my concern would extend well beyond image files, though...are you doing this for .wav files, other media formats, and other types of files? I know that with corrupted video streams the Media Player will usually balk in some way (with some type of "corrupted file" error dialog). It is ridiculously easy to come up with your own file formats, and, as an extension of that, to fiddle with the ones that are out there now.
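The mutation half of that kind of fuzzing is almost trivial, which is why it generalizes to .wav and every other format; the hard part is the harness that feeds the mutants to a parser and watches for crashes. A minimal, repeatable byte mutator (the fake .wav header is just for show):

```python
import random

def mutate(data: bytes, flips: int, seed: int = 0) -> bytes:
    """Return a copy of `data` with `flips` randomly chosen bytes
    replaced by random values. Seeded, so a crashing input can be
    reproduced. Real fuzzers add coverage feedback on top of this."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(flips):
        pos = rng.randrange(len(buf))
        buf[pos] = rng.randrange(256)
    return bytes(buf)

original = b"RIFF....WAVEfmt "  # pretend .wav header
mutant = mutate(original, flips=3)
print(original)
print(mutant)
```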
Edit: Are there any updated graphics format pages anywhere on the internet? Here is a page of older formats if anybody is technically curious (circa 1997):
Bill commented about how the web still has a lot of advances to go. For instance, shopping Amazon in a 3D-style interface, walking down the aisles. Personally, I think the biggest obstruction to this goal is practicality: when you've got hundreds of millions of titles, how practical is this, really?
Did somebody say "Chrome"?
It would be practical if Amazon were willing to deploy some type of desktop application. Any given Amazon screen usually holds only about ten titles (down the center of the page). Now, if you expand that into a 3-D book aisle, it would be a matter
of piping the title information to the client app. I don't know if it would be wise to use a browser for that or not. If you think about an aisle full of books in a typical library, for instance, all you usually see on the spines of the books are titles. Pictures
only become an issue when you slide a title out and look at the cover. What would be nice is if you could walk down a virtual aisle of books and have Amazon populate the titles with a) things similar to what you have bought before, or b) random suggestions
that might lead you to new topics.
You know what? You've just given me a great idea for something I'll do here on Channel 9. Give me a few days and I'll see if I can code something quickly to show you what I have in mind. I'm not familiar with DirectX at all, so the interface might be a little
poky until I figure out something better. I was going to do another "collage" thing, but I've got a much better idea.
"I still contend that we can do better than the current set of data structures out there. The concept of a linked list is inherently flawed no matter where you stick the pointers. "
It's a long story...but I will post something about this soon. I'm quite busy right now with other things, but that should change soon. It will take me several days to assemble a post about this topic, and to put together a prototype/demo. It will be written
up in C++.
The very short version is this: the linked list (single or double) is somewhat primitive in its design, and it does not have to be this way. It is rather odd that the only means of moving between nodes is via connected pointers. That forces a user to traverse the list node by node, which is O(n), and when the list is long enough, that is time-consuming. So, I'm going to build a hybrid between an array (or vector) and a linked list.
And then show you how to flip that into a completely different data structure in real time without moving any data around. The question is not whether it can be done, but how fast I can get it to work.
Then, when I'm done with that, I'll put up a "data structure" builder/designer in the Sandbox.
Edit: Tentative "early" thoughts here (subject to great changes in the weeks ahead).
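Until that post is ready, here is the gist in a quick Python sketch (the real prototype will be in C++): store several elements per node so traversal hops over whole blocks at a time — essentially an "unrolled" list, partway between an array and a classic linked list. The class names and block size below are my own placeholders:

```python
class Node:
    def __init__(self):
        self.items = []   # small array inside each node
        self.next = None

class UnrolledList:
    """Hybrid of array and linked list: each node holds up to
    `capacity` elements, so indexing skips `capacity` items per hop
    instead of following one pointer per element."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.head = self.tail = Node()

    def append(self, value):
        if len(self.tail.items) == self.capacity:
            node = Node()
            self.tail.next = node
            self.tail = node
        self.tail.items.append(value)

    def __getitem__(self, i):
        node = self.head
        while i >= len(node.items):   # hop a whole block at a time
            i -= len(node.items)
            node = node.next
        return node.items[i]

lst = UnrolledList()
for v in range(10):
    lst.append(v)
print(lst[0], lst[7], lst[9])  # prints: 0 7 9
```

With 4 items per node, reaching element 9 takes two pointer hops instead of nine; bigger blocks also play much nicer with the cache, which is half the point.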
Anders, you're my hero. Even if you never read my posts.
I still contend that we can do better than the current set of data structures out there. The concept of a linked list is inherently flawed no matter where you stick the pointers. As I've said before, I'll post a prototype of what I have in mind within the next few weeks.
Side note: why do these types of videos (much like an infomercial) always have to cut to shots of the audience nodding their heads? It's so cheesy.
If you think you can just Google...why don't you become an MVP?
There is more to it.
To what end? To prove what I can already figure out on my own by looking at tutorials, searching online, and from actual experience? It seems to be a certification of limited value (to me). I think what bothers me most is most MVPs' lack of humility.
I'm trying to figure out at what point being an MVP is anything other than someone who can Google, rehash another poster's answer, or rack up post numbers.
Knowledge of a blinking LED. Huh. So...how many wires does that LED have? What is the current flowing into each wire? How do they get one LED to produce multiple colors? I know the answer, but I can wait for you to GOOOOOOGLE it.
MGerlach, how do I become an MVP? At what point do I achieve "dignified status" among my peers? What I really want to do, though, is learn how to TALK like that. Rapture!