This is not an easy post to write, and I realize it will draw some fire. The point of it, though, is to tell you that I am suddenly changing directions, but more importantly why, because I cannot believe I am the only one who feels this way.
Some brief background: I have spent the last two years learning PC programming. So yes, that makes me somewhat new to it. I am not new to programming in general, however, because I have been doing it since I was eight, and stopped when I was eighteen
because Apple shifted from the Apple II to the Mac platform, essentially abandoning droves of programmers. So I stopped programming for years, up until a couple years ago, when my interest was rekindled. This time, it happened on the PC platform. So, I
thought a lot about it again, designed a study plan for myself, began amassing IDEs, books, tutorials, etc., all in an effort to reacquaint myself with the world of code. Something, though, happened on the way to "coding heaven".
The more books I read (or skimmed) the more I came to realize that Windows Programming is one odd subject. I am one of those people who actually likes to master a subject (if possible), but that is difficult to do if the foundation keeps shifting beneath
you every few years (3.1, 95, 98, XP). From a technological standpoint, I understand the reasons for many of the changes--video playback, the internet, security threats, etc. In order to keep up, however, it seems like I have to keep relearning some things
over and over again, and that takes time. And books. The books part is easy...borrow them from a friend or rely on the library and only buy the books which are critical to one's long term success (basically a book with a shelf life longer than six months).
Today, though, I cancelled a library waiting list request for a book on Visual Basic .NET.
In fact, I had recently reserved some other books on .NET technologies at the library and was put on a waiting list for them. One book came in: C# Essentials. Numerous times in the introductory pages, though, I stopped reading, flipped to the
back cover, and double-checked to see if the authors worked for Microsoft. Nope. So I returned to my bookmarked page, and kept hitting more marketing terms. I skimmed several other chapters, and came to one conclusion: why am I learning this? Why
am I spending my precious time relearning how to "draw a rectangle" on a canvas for the fiftieth time and in a different language? I can already write "Hello World" in numerous languages, but am I really ready to learn another? And why should I? The end
result is the same: "Hello World" in text. Sure, C# has garbage collection. Great! This technology has been around for a long, long time. But C# plays nice with Visual BASIC via the CLR! Great! Wonderful idea, really. Can I build my own language on top
of that? Sure, but get ready to read volumes of books, and by the time you have mastered those, make sure you keep an eye out for major OS changes. Compiler mechanics are difficult enough as it is, without having to fight other things. As a developer, I
need to know that there is some stability beneath me, for I like to build on principles I have previously learned, and to retain some skills at using certain libraries. Time, you see, becomes more valuable to you with age, and the loss of relatives, loved ones,
and friends through disease, accidents, and age only increases its value.
So what do I sink my time into? What is worth the effort? If I listen to the marketers, why the answer is "use .NET or get left behind!" So, let me get this straight, it is better to "not get left behind" yet essentially "not master anything"? I've covered
a ton of ground as of late in terms of PC programming, but the more I look into COM, COM+, marshalling, apartment threading and everything else, the more I shake my head and think: let's see, do I concentrate on this older material and sink months of time
into that, or do I jump on board with .NET, bypassing a chunk of Windows history, in the hope that it will not matter? Or will there come a sad day when I catch up on .NET, only to find the API has shifted yet again even though I have been fed the promise
that "it won't" for months on end? Thank goodness I did not spend time learning WinG.
Sure, part of this may be personal, in that I am not drawn to other high-rate-of-change professions such as tax law, but I think there are some greater truths here. I love to code. I love the interaction of creativity, logic, and the ability
to instantly see the results on screen. I love to crunch numbers from time to time, to graph them, to look for trends, to "look at the big picture". I love to solve problems for people, to build tools, and to make their workdays go more smoothly. I have tons of ideas
for programs, from a new language, to a notetaking program, etc. But why on earth would I want to spend my time building up such projects, only to have the pieces of the API shift beneath me and rewrite code? Sure, some of this has been corrected as of late,
and some pieces of Longhorn have been held off or altered. Fine. And yes, I know great pains have been taken to ensure backwards compatibility through the years. Fine. So what do I study then?
The truth is, you will continue to lose developers like me, only to have your supply replenished by new droves of computer science students coming out of college. Only to lose more developers and have the supply replenished by...you get the idea. As
a business model, in some sense it works. Some college programs even feed right into this. Sure, it keeps your families fed, but what about mine?
Sure, I have looked into UNIX. I picked up a book on it a few months ago...and it did not look too tough. I have also looked briefly into Linux, but am a little concerned about the viability of open source in the business world (translation: I need to feed
my family, too). However, I am ready to take a chance on that in the hopes that I can be a part of a change, in hopes that I can improve the product on the desktop, and finally find a venue for the ideas that I have (the fractured nature of Linux versions
is not helping things, however). Years ago I spent some time rebuilding the Apple OS (DOS 3.3) in hopes of creating that "something better" (and for fun), so in some ways I could relate to a lot of what Linus Torvalds wrote in "The Accidental Revolutionary".
I have mentioned that before, here and elsewhere, though a marketer might tell you "that doesn't matter because it has nothing to do with the here and now". Untrue. Many of the principles I learned while working with 6502 assembly carry over nicely
to the Pentiums, but everything is on a larger scale.
I came to Channel 9, though, to see into a world I only read about in books. The more I have seen and read, the more I have learned, but a strange side effect is that I have also lost hope. This is why I am done with Windows programming
(on any kind of deep level), and why I am also done with Channel 9.
Firstly, I can see where you're coming from. I myself have never looked upon it this way, though.
Computer science is a fast moving world. Hardware capabilities grow at an unbelievable rate, and what we, as developers/designers/engineers/whatever-you-do-with-computers are expected to do changes with it. Nobody will like it if you produce an app that looks
and feels like Windows 3.1, because people today expect more.
One of the biggest challenges with the evolving complexity of software (and hardware) is how to effectively manage it. The technology that was big a few years ago may be wholly inadequate to deal with the software projects of today. As such, new technologies
frequently emerge, and you are frequently left to guess which will be worthwhile. You cannot keep up with all of them.
What doesn't change so often, though, is the programming paradigm. Back in the olden days, when memory was limited and machines were slow, programmers were expected to do it all themselves. Optimization in assembly was daily practice. As machines got
better, and the software that was written got more complex, people started to appreciate that software engineering was indeed a problem, and that things should be done to make it easier. So steps were taken to alleviate the programmer, sometimes at the cost
of a few CPU cycles. First was the move to higher languages. Next came Object Oriented programming. And nowadays you have component based software engineering. But I think that these are the only major shifts that have occurred over the years. (yes, I know
that's not a complete list)
Learning new technology is easy. The more technologies you know, the more similarities you see, the easier it gets to learn the next big thing.
Learning C# and .Net didn't take me nearly as much time as learning COM or C++. Why? Because the underlying idea isn't different. Changing the way you think about developing software is difficult. Changing whether or not you need to manually free your objects is not.
This is a dynamic industry. Those who are not willing to learn become obsolete. But in the end, I don't believe that things change so radically all that often.
What's the point of doing "Hello World" in yet another language? No point at all. The C# language is not important. The .Net Framework is important. It is the new technology; C# is just a tool for using it. If you know the .Net Framework, you can learn most of
the languages that use it in a pretty short time. Object oriented languages are object oriented languages, no matter in what flavour they come. It shouldn't take much time to learn them.
But in the end, what's important is that you have a language and technology/API that allows you to do what needs to be done, and in a way that's efficient and easy. And if you're doing it professionally, a way that keeps you employed. All other concerns are secondary.
To quote a saying from martial arts:
"It doesn't matter which path you take to the top of the mountain, in the end we all see the moon."
When I read that post, I had the feeling that you are not from the younger generation (like 18 years old). But you are right, technology is evolving fast and we have to learn new things our whole life. Maybe I'm lucky to get this lifelong-learning message/warning
while I'm still young.
It's not only the Windows platform that evolves very fast. Example: Saturday I was at a car dealer who explained to me how the in-car computer and GPS worked. He told me that he had to take a training class every month to keep up to date; otherwise he would be left behind.
Another example: the entertainment world. Every 5 years we have a completely different gaming platform (consoles). If you were a programmer at EA Games you would have to relearn how to make that "Hello World" application! (Am I glad that I can still play Doom
1 on Win XP)
When I started with programming, I learned VB6. A year later VB.net arrived and I started upgrading my knowledge by reading articles that explained what was different. From now on I will have to learn about those generics and Longhorn, but I'm not going to learn
how the old COM stuff works or unmanaged apps.
Sven Groot wrote:
One of the biggest challenges with the evolving complexity of software (and hardware) is how to effectively manage it.
I have been in IT for over 31 years and I have to agree that the complexity of software development has become a problem. Data handling in the .NET languages has become easier, but at the same time other programming chores have become far more complex with
additional layers of coding required to achieve the same results.
I believe that the KEEP IT SIMPLE formula should be applied to all software design. As an example, many years ago, I wrote a simplified interface for querying data on an IBM mini and the customer ended up writing over one hundred queries using this interface,
and with the standard interface they never even attempted to write one query. The problem was the initial learning curve. There appears to be a similar problem for programmers making the switch to the .NET languages. Microsoft might want to look at further
simplifying the .NET languages by using the IDE to perform more hidden automated functions.
I really wouldn't worry too much about the change in technology; it's almost inevitable. And Microsoft aren't the only culprits: Apple have also recently changed the development model to use Objective-C (although you can still code in C for Carbon), which, although
I like it, is pretty obscure.
Sure Linux hasn't really changed that much but which UI toolkit are you going to learn? Gnome or KDE or just plain ole X? (C/C++/C)
Coming from 5 years of being in the Java world I was really nervous about jumping onboard the C# ship not having a very in-depth knowledge of COM, but you know what? I've not encountered a situation yet (2.5 years later) where I've needed to know COM in great
detail. Sure I've used P/Invoke a couple of times (which keeps the C monster in me happy) but other than that it's been C# all the way. There's no need to live in the past, if COM or Win32 interests you then by all means learn them as you learn .Net (they
will both be about for a *very* long time) but Microsoft are pushing towards .Net and that's where we have to go (not like it is painful though, C# is a very nice language/environment).
I imagine this is exactly how a lot of Mac developers felt when told they should learn ObjC - even though today you can still code in C using Carbon.
I started programming in 1979 in Basic on the Apple machines.
My first job was in 1984, working with Ada and CICS (yes, Ada, not COBOL). I moved on to C -> C++ -> Java -> C#. Ada was on Amdahl mainframes, C was on SCO Unix 1.0, C++ was on Windows & OS/2, C# on Windows and Linux.
I have had a blast doing all this; every paradigm shift was enjoyable. I think that to stay in this business you have to embrace change, and this ends up being the difference between a programmer and a software engineer. Not that the latter is smarter or anything,
they can just make the shift to new ways of solving the same old problem(s).
When I started that Ada job I was working with people that had been programming COBOL longer than I had been alive. Not all of them made the transition to Ada, but the ones that did later made the transition to C without too much of a hiccup as well. IMHO,
most people do not enjoy change, but the really talented developers do, and there is nothing wrong with not being one of these people.
Note: I do think you are right that there will be lots and lots of programmers who are churned as each paradigm shift happens, and that this will continue.
One thing is abstraction toward the end user, but I don’t agree with hiding stuff from the programmer.
I think programming is on its way to become a more specialized science on its own terms.
The complexity, diversity, and security requirements demand a professional mindset.
Like when you go to the dentist or your pediatrician, you expect the person to be educated and trained.
Or like when you change the tires on your car, or do some higher-level mechanics: that's the abstraction the programming environment will give you. But going under the hood requires some form of training and/or education.
There will always be a place for the hobby coder, but you can't expect programming to be the one field that makes everything easier, when society at large gets more complex.
Everyone starts out as a hobby coder; that is what sparks your interest in learning more.
Norwegian scientists have found that you cannot generalize knowledge in one area to others if they are not directly related.
For example: a company's senior IT consultants, with many years of experience in a diversity of technologies, are not automatically more competent than a person who has just graduated from college or university.
The experienced consultant will do better in the more complex situations, because patterns will emerge. But in easier programming tasks, the coder with less experience will be just as good as the one with experience.
Change is good. Change is inevitable. If it were not so, I would be writing this out on a clay tablet.
Part of what it means to have a career in your chosen field is to be a life long student of your chosen field.
I am over 18 by almost two decades. I consciously made the decision to learn “the way” of .NET because it features the best of OOP (Java) on the platform where years and years’ worth of my data is being held hostage (by Microsoft).
My training is in the sciences and my discipline is in writing (for humans). This implies that I produce data on a
personal level—not just on the level of a resource in some IT shop. And keep in mind that when I say the word
data I refer to everything created by the Save As… command as well as DBMS storage. The WinFS world intends to make this mindset famous.
I started storing my data on the Microsoft platform because Linux was not around in my formative years—compared to the available data management technologies featuring the office analogy (windows, desktops, files and folders—and the DBMS) Microsoft
was the Linux of my formative years, during the late 1980s and early 1990s.
In those days, using a UNIX system meant having academic privileges. Using a Mac meant being stuck with a crappy tool like the early versions of FileMaker Pro (Access 2.0 was far superior). Using an Amiga meant…
So Microsoft excelled in providing personal data management tools for “small business”—which really means that the average citizen can perform data processing tasks that only huge organizations once enjoyed. Trying to sell this concept to the general
couch-potato public and the “average” techie nerd seems to be very difficult. So it makes sense why the people who might fall under these gross categories would “wake up” and reject Microsoft outright.
I can’t just jump up and leave Microsoft because my data is stored in too many proprietary formats (especially my richly formatted Office documents). So for one
last time, I decided to learn a new technology from Microsoft: the .NET platform featuring C#. Now that I know that this tool is available on the Linux platform (and developing on the Mac platform), I am encouraged to invest
one more trek up the learning curve. I intend to get my data into standard formats (XML-based formats like DocBook and XHTML), and Microsoft will enjoy my relatively enthusiastic support of their platform and products until this process is complete. Now
I do not have to “leave” Microsoft behind; I just need to have the tools available to perform data interchange for all platforms I choose to recognize.
I see myself having a shallow relationship with Microsoft—right now, it’s relatively deep. The depth of this relationship is directly proportional to the shortcomings of their products. Microsoft “wants” me to have a shallow relationship with their products.
They want “smart” tools that can guess what I am trying to do and “help” me do what I am trying to do without much thought and study. But at the same time, they “want” me to be dependent on the Microsoft platform.
The way Microsoft and other large commercial organizations (based on the cultural values of the Roman Empire) design dependency into their products will always find conflict with me. It is an error to assume that the Linux world will not be tempted by the
desire for imperial/commercial power. It is an error to assume that I can “leave” Microsoft when I have so much data in their proprietary formats. My awareness of this fact makes me feel trapped not empowered.
I do not have to “leave” Microsoft; I just need to have the tools available to perform data interchange for all platforms I choose to recognize. As of this writing, the .NET platform provides these tools.
There's one untruth in that post: Win32 will be updated for Longhorn. Sure, some of the new stuff is managed code only, but there's plenty of Win32 (or Win64, I suppose) stuff too. Microsoft recognizes that there are still quite a lot of people out there
who use plain old C++. Heck, even MS internally still develops a lot of unmanaged code.
Also, I agree Managed C++ is unmanageable (pun intended), but C++/CLI really is a lot better, and also standardised. And you can, using COM Interop, call managed code from unmanaged code as well, although you're somewhat limited when doing that.
Looking at the evolution of Microsoft OS APIs is pretty interesting. It reflects the way people's method of programming has changed.
In the DOS and Windows 3.11 and older days, the API was predominantly interrupt calls. This reflected the fact that most DOS developers of the day programmed in assembly. Windows, especially the Win32 API (designed for NT, later ported to the 9x kernel), was
designed in a time when everybody used C. So it's meant for C. Of course, the big drawback of it is the fact that C has no namespaces, so we get CoMarshalInterThreadInterfaceInStream and similar function names. When the OO movement and C++ became more popular,
the gap was filled with MFC and ATL. And today, the latest and greatest development in software engineering processes is considered to be component-based software engineering. This is what .Net is based on.
And as for frameworks, frameworks is just a fancy word for API. You always need some kind of "framework". You say you can write your own frameworks, and I'm not saying you cannot, but you're always building on top of an existing "framework", whether it be the
C++ standard library, the POSIX API, the Win32 API, or whatever. The fact that the .Net Framework as an API is more extensive than just a set of operating system services is merely a convenience. It is sort of a merger between what's provided by both the C++ Standard
Library (and with the coming of generics in Whidbey this to some degree includes the STL) and the Win32 Library. This however doesn't stop me from writing my own linked lists and trees and whatever.
If you write for Windows, you will have to use some API that accesses the Windows system. Whether you use it directly or indirectly (like with MFC) doesn't matter. Whether you use the plain Win32 API or .Net doesn't matter.
If you write for Linux, you will have to use some API that accesses the Linux system. Whether you use POSIX, X, or whatever other APIs are available, it doesn't matter. You will need at least one.
I don't know if Linux maybe has more different APIs for the same thing or not. I'm not that well-versed in Linux.
But I don't believe that fundamentally you're more limited as a programmer in Windows than in Linux. Sure, there's some stuff you can't do in Windows that you can do in Linux, but I'm sure the same is true for the reverse. And which one wins depends entirely
on personal preference and the job at hand. There's not a single one "winner" by default.
I guess we're just using different definitions of the word framework.
And I wasn't saying the lack of namespaces is the biggest drawback of C. I'm saying it's one of the biggest drawbacks of C when you're trying to design an API that has thousands of functions.