earnshaw

Comments

  • Windows, Part I - Dave Probert

    Very fine presentation!  The old mainframe that I used to work on had processes (called "runs") and threads (called "activities").  The thread dispatcher maintained context for each thread in a structure called a "switch list" (SWL), which was a misnomer because the actual switch list was a priority table of linked lists of SWLs.  Paired with the SWL was the Activity Save Area (ASA), which contained processor state (relocation information for program instructions and data, which were kept separate) and the contents of the CPU registers from the last time the thread halted in favor of a different thread.  There was no paged memory.  Programs were either entirely in memory or entirely out on the swapping drum (yes, drum).  That decision was taken to avoid thrashing: working set theory didn't exist yet, and paging would have been overkill for this machine's small memory of iron cores.

    The OS was written to run equally well, and in parallel, on all available CPUs.  By 1970, this machine only crashed once or twice a day because of bugs in the OS.  It supported dozens of users in interactive mode on teletypewriters (Model 33, Model 35), and it also ran jobs in the background as "batch" processing.  A huge backlog of batch jobs would ordinarily accumulate during the day and would be worked off at night.  Many trees died to afford users something to look at as output.

    The whole thing was royally pooh-poohed by sophisticated faculty from universities "of quality" where they had adopted Unix wholesale (and Multics before that) as the sine qua non of operating systems.  The thing they didn't like was that the interactive mode presented exactly the same user interface as the batch mode.  (The text-based user interface was nothing like JCL: it was NOT compiled, it comprised simple commands.)  That was quite an advantage when creating production code and when testing out production run control language.  But hell, what good is an operating system without redirection and piping and a tree-structured file system, etc., etc.

    Anyway, my point is that Windows NT is a very good operating system -- it beats the pants off of Linux in terms of out-of-box usability -- and it builds on the valuable legacy of the OS that I described above, which was very good for its time, still exists, and can still run binary programs written for it 40 years ago, if you can figure out how to read in the deck.

    When I first read about hyperthreading in 2002, I decided that Intel had built a chip that was able to hold context for two threads at the same time.  From what I have read in response to Dave Probert's talk, I was right.  Windows must somehow schedule the right two threads on the chip so that the fast context switch in the chip can be used.  Otherwise, HT is of no value.  I imagine the top two threads on the priority queue would ordinarily be a good choice, assuming they aren't already scheduled on some other chip.  Then, when one of the two threads is blocked waiting for, say, an I/O completion, the other thread can instantly be restarted using context already onboard the chip.  There are a lot of CPU cycles to be saved by avoiding the slow context switch!  A toy sketch of that pairing idea follows.
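
    The sketch -- strictly my own illustration, not how the Windows dispatcher is actually built -- just hands the two highest-priority ready threads that aren't already running elsewhere to the two logical processors of one hyperthreaded core:

        // Toy model: pick the two best ready threads that are not already
        // running somewhere else and assign them to the two logical
        // processors of one physical core.  My own illustration only.
        #include <cstdio>
        #include <utility>
        #include <vector>

        struct Thread {
            int  id;
            int  priority;   // larger means more urgent
            bool running;    // already on some other logical processor?
        };

        // 'ready' is assumed to be sorted by descending priority.
        // Returns the ids chosen for the core's two slots, -1 for an idle slot.
        std::pair<int, int> pick_pair(std::vector<Thread>& ready)
        {
            int slot[2] = { -1, -1 };
            int filled  = 0;
            for (Thread& t : ready) {
                if (filled == 2) break;
                if (t.running) continue;      // leave it where it is
                t.running = true;
                slot[filled++] = t.id;
            }
            return { slot[0], slot[1] };
        }

        int main()
        {
            std::vector<Thread> ready = {
                { 7, 31, false }, { 3, 24, true }, { 9, 24, false }, { 4, 8, false }
            };
            std::pair<int, int> chosen = pick_pair(ready);   // expect threads 7 and 9
            std::printf("logical CPU 0 -> thread %d, logical CPU 1 -> thread %d\n",
                        chosen.first, chosen.second);
        }

    When one of the chosen pair blocks on, say, an I/O completion, its slot simply goes back to pick_pair() for the next-best candidate, while the other thread keeps its context on the chip.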
  • Kang Su Gatlin - On the 64-bit Whiteboard

    As a matter of policy, I would avoid unmanaged code in a 64-bit environment.  Of course, there is the porting problem.  People can and do write code with raw pointers and pointer arithmetic; that was peachy in a time, long past and not far removed from assembly language, when any geeky method to save a few CPU cycles or bytes of memory was smiled upon.

    Any time the underlying architecture changes in some fundamental way, these insects fall out of the woodwork.  Mercy, with the high level of abstraction now possible, can't we stop revisiting this stuff?  (A concrete example of the offending idiom is sketched below.)
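
    Here is the sketch -- a contrived example of my own, plain C++ and nothing from the whiteboard session -- of the idiom that falls over on 64-bit Windows, where the LLP64 model keeps long and DWORD at 32 bits while pointers grow to 64:

        // Stashing a pointer in a 32-bit integer works by accident on a
        // 32-bit machine and drops the upper half of the address on a
        // 64-bit Windows build (MSVC warns with C4311, then does it anyway).
        #include <cstdint>
        #include <cstdio>

        void broken(char *base)
        {
            unsigned long addr  = (unsigned long)base;   // truncated on 64-bit Windows
            char         *alias = (char *)addr;          // may no longer point at base
            std::printf("%p vs %p\n", (void *)base, (void *)alias);
        }

        void portable(char *base)
        {
            std::uintptr_t addr  = (std::uintptr_t)base; // pointer-sized on any target
            char          *alias = (char *)addr;         // round-trips correctly
            std::printf("%p vs %p\n", (void *)base, (void *)alias);
        }

    The fix is mechanical, which is exactly why it is maddening to still be making it by hand.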
  • Herb Sutter - The future of Visual C++, Part I

    I rode an old war-horse called assembly language through the bulk of my programming career.  I also dabbled in Fortran, APL, COBOL, even did some Algol and Snobol.  So it's nice to hear something positive said about the value of getting closer to the iron, when the trend has been to abstract the iron away.  I get a charge out of C# because it makes doing simple things simple and increases my productivity.  And I don't have to create, for the 100th time, some variation on a collection class.  I had a conversation last summer with one of my contemporaries during which I remarked that today's Computer Science student may not be getting fully exposed to core concepts like trees, queues, hash tables, deques, stacks, spin locks and so forth, because these are abstracted away as prewritten classes.  Not that that's bad in general.  It's not.  But it poses a problem for teachers of computer science, who must ensure that the way these things work under the hood is still revealed.

    Of course, this piece is about C++, which I used for many years as a systems programming language.  When I first read the C++ for .NET book, I was frankly appalled at how different the language looked from the one I had grown so familiar with.  That's when I learned C#.  I don't denigrate C++, and I am happy to learn that problems with using C++ in a managed code environment are being addressed.  For me, though, I use C++ only when C# does not fulfill my needs.
  • John Pruitt - Thinking about the customer in design

    That a product does not end up looking like Frankenstein's monster makes sense from many points of view: end users, marketers, developers.  Still, Microsoft customers receive products that present an inscrutable public face, with many controls whose purpose, rationale, and very existence are not at all clear from their appearance in the product, let alone from in-product "help" or published literature, if any.  I am reminded of the repetitive experience of relearning the IDE as each release is published.  If I perform a certain task using a certain idiom in release X, then that task is performed using a different idiom in release X+1, without so much as a how-do-you-do.  There are tons of widgets that I don't care about and won't ever use.  There are some that I should know exist and should be able to learn in under an hour through some well-defined teaching aids integrated with the product.  Differences between releases should be better explained.  What is available should be made explicit.  A good presentation of the design philosophy of a release would help users be better users of that release and willing buyers of the next release.

    Back in the early 1970s I looked forward to receiving weekly updates (natural language summaries of the purpose of a feature set and how to use it) on a locally developed text editor.  It was a pleasure to vicariously experience the product as it was being built and to learn each feature as it was added.  In the aughts, things have devolved so that I see only the end product, with no systematic approach provided to learn what it offers and how to use it.  This goes for everything from the IDE to the operating system.  Some of the Knowledge Base articles may as well be written in Martian for all the insight they provide.  There must be a tacit rule that all such writing be overly concise, rigorously accurate, commit to nothing, and, in the end, explain nothing -- unless you came to the article already more or less understanding the solution to the problem you were trying to solve.  What must be obvious to the people who work with products on a daily basis, because they create them, need not be and usually isn't obvious once the product is bought, paid for, and sitting on someone's desk.  Crossing the gap has no single or obvious solution, but the gap should be recognized.
  • Don Box - What goes into a great technical presentation?

    I think nudity should become de rigueur for all technical talks.  At least, when the talk is going badly, there will be other things to think about.
  • Jason Zander - Tour of the .NET CLR team

    It is interesting to note that the same kinds of memory management bugs of 30 years ago still occur today.  The technology for resolving them is oh so much better, but I have often had to put PRINT statements into memory managers to detect by whom, when, why, and in what order some buffer got doubly allocated or orphaned.  Fortunately, these bugs are now closeted away from application programmers.  They are left for the systems programmers of Washington State to detect and destroy.  (A toy version of the instrumentation I mean is sketched at the end of this comment.)

    One machine I worked on had 18-bit addresses of 36-bit words.  Inside the OS these addresses were not relocated; they pointed to actual hardware locations.  It was great fun when code using these addresses ended up clobbering memory completely unrelated to the code's function.  Evidence that such corruption had happened might not surface until literally days later.  Tracking such phenomena back to the point of origin was very, very difficult.  Eventually the hardware evolved to the point that so-called absolute pointers into memory were abolished, so that memory limit registers were always used and corrupt pointers usually caused an immediate, diagnosable machine stop.  But I digress.
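
    Here is that toy -- entirely my own sketch, nothing to do with the CLR's real allocator: a wrapper around malloc and free that logs every call, complains about double or stray frees, and reports orphans at the end.

        // A toy tracking allocator in the spirit of those PRINT statements.
        #include <cstdio>
        #include <cstdlib>
        #include <map>

        static std::map<void*, const char*> g_live;    // live block -> site that allocated it

        void* t_alloc(std::size_t n, const char* site)
        {
            void* p = std::malloc(n);
            std::printf("alloc %zu bytes at %p (%s)\n", n, p, site);
            g_live[p] = site;
            return p;
        }

        void t_free(void* p, const char* site)
        {
            auto it = g_live.find(p);
            if (it == g_live.end()) {
                std::printf("BUG: double or stray free of %p (%s)\n", p, site);
                return;                                // refuse to free it twice
            }
            std::printf("free %p (%s), allocated by %s\n", p, site, it->second);
            g_live.erase(it);
            std::free(p);
        }

        void t_report()                                // anything still live is an orphan
        {
            for (const auto& kv : g_live)
                std::printf("LEAK: %p never freed, allocated by %s\n", kv.first, kv.second);
        }

        int main()
        {
            void* a = t_alloc(64, "main, first buffer");
            void* b = t_alloc(128, "main, second buffer");
            t_free(a, "main");
            t_free(a, "main, by mistake");             // flagged as a double free
            t_report();                                // reports b as never freed
            t_free(b, "cleanup");
        }

    Crude, but it answers "by whom, when, and in what order" without a debugger in sight.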
  • Larry Osterman - His one interaction with Bill Gates (over DOS networking stack)

    Supposing the 60K for LAN Manager was permanently resident in memory, that would be almost 10% of the machine's physical memory given over to one OS component -- a serious reduction in the space available for user programs.  Come to think of it, when IBM designed the PC around 1980, they figured nobody would ever make an application that would need that much memory.  Then the desktop computer adopted all of the generally accepted principles of mainframe design (paged virtual memory, multiple CPUs), added a few ideas of its own (graphical user interface, object-oriented programming), and the problem went away.  In his way, Bill was right to complain.  The machine wasn't up to the task of handling LAN Manager.

  • Amanda Silver - Demonstration of code separation in next version of Visual Basic

    The blurb sez "how that'll help you in your Visual Basic development," but I disagree.  I see that the definition of a class can be done piecemeal -- a fraction here, a fraction there -- and that that would be convenient, for example, in automated code generation scenarios.  The code-behind of a form contains code generated by the IDE and code written by the end-user programmer.  Those two pieces are easier to handle when separated.

    But, to return to the main point, Amanda showed me how the mechanism works, but not how it will HELP ME.  What does it buy ME?
  • Jeffrey Snover - Monad demonstrated

    This is really, really good.  Can't wait to get my hands on it.  There is probably no technical reason to delay its release until Windows 2005 Server.  Hope against hope that the user doc is really good, too.

  • Paul Vick - What has Visual Basic learned from the Web?

    BASIC is an acronym for Beginner's All-Purpose Symbolic Instruction Code.  It originated at Dartmouth College in the 1960s.  Visual Basic's roots are in BASIC.  For example, in BASIC the DIM statement declares an array, and DIM is short for "dimension."  This has been artificially extended in Visual Basic to be the generic initiator of any declaration, including the instantiation of an object -- a concept that had not even been conceived in the 1960s.

    BASIC is important in the history of Microsoft in that it was the first product the company ever sold.  Some fantastic productivity features, such as Forms, first made their appearance in Visual Basic.  Rapid Development and Deployment is possible in Visual Basic.  Unassisted Windows development in C, or with MFC in C++, does not admit of such rapidity.  MFC is a contortion and sometimes the opposite of helpful.  Then we get to the .NET Framework, the CLR, and C#, which are works of art, in my humble opinion.

    Visual Basic is fine for people who have become used to it and are productive with it.  It isn't block structured, and it is tied to BASIC, both of which turn some people off.  I prefer C# because it is a modern language and a viable alternative to Java.  In any case, it is a relatively simple matter to hand-translate Visual Basic snippets into C#.