Much of this looks pretty "old hat" to even a halfway experienced VB 5 or 6 programmer - but I think
that's the idea. A lot of what .Net does is introduce C programmers (if only through C#) to things that Classic VB has had all along.
Turning off Option Strict doesn't eliminate strong typing - I believe that point was made. What it does is
allow late-bound operations in addition to early-bound ones. This can actually
improve programmer productivity, because the bulky explicit-reflection approach requires a great deal more source code to be written and debugged wherever dynamic object use is desired.
Clearly late binding has costs, including performance penalties. That's why one doesn't use it except where warranted. A sort costs resources too, but you don't forgo sorting for that reason alone. Instead you avoid sorting lists that are properly
ordered to begin with. Perhaps an oversimplification, but it's the same concept, more or less.
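To make the "bulky explicit reflection" point concrete, here is a rough Java sketch (Java as an analogue - VB.Net's late-bound calls compile down to similar runtime reflection under the hood). The `Greeter` class and `lateCall` helper are hypothetical illustrations, not anything from the talk; note how much ceremony the reflective route needs compared to a one-line early-bound call.

```java
import java.lang.reflect.Method;

public class LateBindingDemo {
    // A type we pretend not to know at compile time.
    public static class Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    // The explicit-reflection route: look up the method by name,
    // match parameter types, invoke, and unwrap the exceptions yourself.
    // A late-bound language does all of this for you behind one line of code.
    public static Object lateCall(Object target, String name, Object... args) {
        try {
            Class<?>[] types = new Class<?>[args.length];
            for (int i = 0; i < args.length; i++) {
                types[i] = args[i].getClass();
            }
            Method m = target.getClass().getMethod(name, types);
            return m.invoke(target, args);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Object unknown = new Greeter();   // static type is just Object

        // Early-bound equivalent, if the type were known at compile time:
        //   String s = ((Greeter) unknown).greet("world");

        String s = (String) lateCall(unknown, "greet", "world");
        System.out.println(s);   // prints "Hello, world"
    }
}
```

The helper above is the fixed overhead you pay once, but every dynamic call site still carries the casts and the loss of compile-time checking - which is exactly the cost/benefit trade the surrounding paragraphs describe.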
It is good to see how .Net is working to integrate the best things from both programming worlds, as well as extending some of these concepts even further.
At the link above is a 1985 patent describing how one vendor moved some resource-intensive tasking overhead from software to an auxiliary processor to improve performance as hardware grew cheaper. Things like multiple waits had been routine for two decades by then.
It is true though that Computer Science programs at the undergrad level rarely get very deep into the mechanisms supporting these things. They are probably more frequently encountered in vendor-provided OS Internals courses.
It is great to see Microsoft making this type of background more available to customers.
I'm often amazed by talks like this, but then I realize everyone comes at things from their own perspectives as well as struggling to get a set of concepts across in a limited amount of time - and sometimes on the fly.
The idea that Unix and VMS are the only significant OS family lines has my sides aching from laughter though. This may be true in some narrow sense, but believe it or not there are OS families with much lengthier heritage and as much or more "success" within their own markets.
The hoary old days of "we loaded stuff from cards on a machine with little or no OS, no disk, in a single-tasking environment" went out pretty early on. The most primitive box I ever worked on was a very early-60s IBM 1620. While primitive, even there we had disk
and disk-resident compilers. True, the card-resident compilers were still there to be used, but almost nobody did that.
As for things like virtual memory, protected address spaces for processes, and the like - commercial implementations go back at least to the Burroughs B5000 (1961). This machine didn't even offer an assembler, the OS itself being written in a high-level language.
The descendants of this platform are in use today and indeed are still actively marketed by Burroughs' successor organization, Unisys.
Developers dealt with concurrency and "threading" frequently, since multiprocessor machines were quite common along with a complement of sophisticated I/O and communications processors that operated asynchronously. Such "servers" routinely supported tens of
thousands of simultaneous users through OLTP, often in regional, national, and international multi-site networks.
The minicomputer (and later microcomputer) world was a very simplistic place by comparison. Crude things like the Unix "fork" were something other people shook their heads at.
What the mini/micro ecosystem did do, however, was democratize computing. These systems were cheap in relative terms, and stayed so as they grew in power and sophistication. This meant that more and more people were exposed to computing, and exposed to more of it over time.
But the VMS/Unix family lines are still rediscovering things that were old hat by the 1970s elsewhere in computing.
Everyone seems to be getting excited that application developers should be learning to deal with multithreading now. Have we forgotten that most machines - even desktops - are running numerous asynchronous processes and threads all day long? Pop open your
Task Manager, gee.
And in a server environment I can't believe people really find themselves running a single application. Didn't "got an app, get a box" go out of style years ago, even in the NT world?
Multithreading "because I can" is not a sensible way to architect applications. Nor is it necessary just to keep hyperthreaded, multicore, or multiprocessor machines fully utilized. That's why you have environmental system software between your application code
and the OS. You let that middle layer manage worker threads and instances of your application code - which typically should remain "single threaded."
But don't conflate the plumbing choices with the thickness of the client. Whether you use the thinnest of web clients, with most of the heavy lifting done at the server, or the thickest of clients on the desktop, choosing among DCOM, OLEDB, MSMQ, Web
Services, or ad hoc plumbing is a separate decision altogether.
The richer the browser-hosted client became, the bigger a problem security became. Because of the vulnerabilities inherent in these powerful tools (scripting, ActiveX, scriptable controls, etc.), we ended up with a mess if we wanted to provide a decent user experience without resorting to dangerous browser security settings.
That said, with proper code signing, IE security zone settings, etc. it was entirely possible to provide quite a "rich" experience via DHTML. Such "web pages" - or even HTAs - could easily be either thin, relying heavily on mid-tier servers, or "thick," talking directly
to back tiers in the manner of conventional 2-tiered applications. There is really nothing to keep you from doing this today using "Web Services" as the plumbing - though most of the pre-.Net bits you need are out of vogue now.
Avalon just updates the concept and makes use of the .Net technologies under the hood, solving many though not all of the security issues and of course offering a lot of new richness from the developer's perspective.