
jj5: Yeah. We got goth served.
  • Solid Text Editors for Programming on Windows

    chuawenching wrote:
    I like microsoft notepad a lot. Coz it is free and comes with Windows.

    But if notepad can add on this features

    What!? Add features to notepad!? Never! Heathen! Smiley

    EditPlus gets my vote. I've been using it for years. It's not that great though.

    I do all my coding in Visual Studio. IntelliSense my friend, it's smack for programmers. Oh, and code formatting is just a CTRL-A, CTRL-X, CTRL-V away..


  • Can someone explain Managed Code in layman's terms?

    chuawenching wrote:
    Garbage Truck comes to your home and collect garbages for you automatically while you are asleep early in the morning. Smiley

    Haha. Perhaps. But us programmers still need to take out the trash.. (or stop referencing it, as it were)
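
    For what it's worth, "stop referencing it" in code; a minimal C# sketch (names invented for illustration):

```csharp
class Demo
{
    static void Main()
    {
        object garbage = new object(); // 'garbage' keeps the object reachable

        garbage = null;                // drop the reference: the object is now
                                       // eligible for collection next time the
                                       // GC runs; we never 'delete' it ourselves
    }
}
```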


  • A day in a life

    spod wrote:
    Thanks JJ5.. i'm certainly learning things Smiley

    Yep. Have fun. Try not to lose as much hair as I have.. :/

    spod wrote:
    i see the memory management issues now...The winforms, and long-lifetime objects ( delegate invocation lists ) holding refs to short lifetime objects ( windows forms ) is certainly an issue.


    spod wrote:
    Weak references can't help you here can they? ( do you control both sides of the code ). I agree using dispose for this isn't too nice, and kinda against the spirit of the thing etc...

    Been there, done that. The answer: not really. Firstly, the 'event' pattern is first-class in the framework so I have to deal with it (meaning that in many cases I do not control both sides of the code). Secondly, WeakReferences only really introduce a new set of problems. Performance is one, but not as big a problem as 'what do I do if an event gets thrown to a dead object that hasn't been collected yet?'.

    At the end of the day, I need to unsubscribe my events. So all that this 'managed environment' seems to have done for me is turn 'delete' into 'unbind', 'dispose', 'null'. It didn't become easier, it became harder. Meh.

    spod wrote:
    Wow! i never knew that, thanks!. just checked the IL and it is essentially passing a stack pointer...

    Yep. I checked the IL too. I only discovered this last week. Then I started grumbling about not having an 'in' keyword..

    spod wrote:
    i'll get back to you on the others when i've had a chance to read the threads you sent etc.. ( work might get in the way this week Smiley )



  • Are computers becoming more human, or are humans becoming more like computers?

    spod wrote:
    jj5 wrote:
    I worry about the term 'deterministic'. I believe everything is deterministic unless it takes state from outside itself.

    If you consider 'the universe' (i.e. everything) there is nowhere to draw state from. Thus, the universe is deterministic.
    interesting. so how does stuff like radioactive decay fit in? that is a non-deterministic process right? doesn't that make the universe non-deterministic ...( can you contain non-deterministic parts, and be deterministic yourself? )

    The point I tried to make (and thus why I don't like the term deterministic) is that there is a difference between being able to predetermine the outcome and knowing that there will be only one outcome.

    Since there will be only one outcome, and we can say the outcome will be based on the present state and a set of rules, then the outcome is deterministic.

    We don't really know what 'the rules' are, but we try to determine (or 'model') them through science. We also come up with concepts to describe state, most of them very crude. We only use the term 'random' when we are unable to describe the rules. The irony is that I don't believe we can know the rules; we can only know what we believe to be true based on observation. If the rules we observe change over time (i.e. the rules are dynamic) then we won't know until we observe a state change that doesn't conform to the model we had created to describe them. Perhaps it is safe to assume that 'the rules' are hardcoded..?

    At any rate, I believe everything is deterministic. I guess it's little more than a well-reasoned belief in fate.

    Since I can observe only one past, and I can observe the past impacting the future and the future becoming the past, it's not too hard to believe.


  • Interview questions

    Richard Acton wrote:
    Discard 1 fuse. Cut the remaining fuse in half.

    With one half of the fuse, tie its ends together to form a loop. Tie the other half anywhere on to the loop as so:
    45 minutes will be up when all of the fuse is burnt (excluding the fuse we discarded)

    Edit: didn't see Jeremy W's answer. Thats cool that we found different solutions, but i should get a bonus point for the ascii art Wink

    Nope. Your solution is wrong. The problem is that the burn rate is not consistent. You could have created any amount of time, like this one which burns for 36 minutes:

                  /   \
    30 minutes => |   | <= 24 minutes
                    | <= 6 minutes

    This question is designed to teach you that you need to burn the candle at both ends if you're in IT.. Tongue Out


  • Are computers becoming more human, or are humans becoming more like computers?

    spod wrote:
    One thing keeps me out of the strong ai camp... i find Roger Penrose's argument in The Emperor's New Mind quite compelling ( it's a long time since i read it but as i recall it's essentially that our brains are at a very deep level quantum mechanical and non-deterministic and can never be accurately simulated by a deterministic machine in a reasonable time )....

    I worry about the term 'deterministic'. I believe everything is deterministic unless it takes state from outside itself.

    If you consider 'the universe' (i.e. everything) there is nowhere to draw state from. Thus, the universe is deterministic.

    The best argument for this is the past. You can't change the past. It has happened. The future will become the past. Sure, there are all sorts of weirdisms about frames of reference, and we have difficulty in breaking down and modelling the very small aspects of things in our universe (our crude approximations are proving useful though). But there is only one past that affects me, the past shapes me (action/reaction), so everything in the universe is 'deterministic' (this doesn't mean there can't exist 'other universes' or 'parallel universes', it just means that 'my universe' is everything that can impact me).

    So the universe is a state machine in a constant state of flux. We can never really 'prove' the rules, but we can observe what the rules 'tend to be'. I can't think of any way to 'simulate' or 'pre-determine' what the universe will do, because the only stuff that I have available to work with also exists in the universe (matter, energy, etc.) and if it didn't then I wouldn't be able to use it, because as soon as it affected me it would be impacting the universe and have to be counted as a part of it. But just because I can't pre-determine the outcome (i.e. the future) it doesn't mean the future is not deterministic.

    I think we (as humans) need to get over ourselves. We're not that sophisticated. We're "just a virus spamming the universe with rough copies of ourselves". Anyway, since that's what we're doing, I wish we'd hurry up and get off Earth. I hope an inter-planetary settlement operation takes place during my lifetime.. Smiley

    My big philosophical thing these days centres on language. Am I really communicating with you, such that we are actually doing something 'meaningful', or am I just using symbols that, based on my experience, are likely to cause you to react in a way that I suspect is likely to be favorable to my existence in the long term?


  • A day in a life

    spod wrote:
    My understanding of the dispose pattern is that it is only necessary when you are holding unmanaged resources ( windows handles, db connections etc ), where it is important for deterministic finalization to take place ( you want the managed thing to go away as soon as it can, as it's holding a shared, expensive resource ) .

    using is just syntactic sugar for Dispose so

    using( Foo foo = new Foo() )
    {
       // ...
    }

    is equivalent to

       Foo foo = new Foo();
       try
       {
          // ...
       }
       finally
       {
          (foo as IDisposable).Dispose();
       }

    Yeah. <g> Tongue Out

    You've got a bug in the finally block by the way. Wink

    I agree, the IDisposable interface should only be used to control the release of unmanaged resources. Calling Dispose() should be optional.
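
    For reference, here's the canonical Dispose pattern as I understand it from the framework design guidelines (Wrapper and its handle field are invented for illustration; the commented-out CloseHandle is a hypothetical unmanaged call):

```csharp
using System;

class Wrapper : IDisposable
{
    IntPtr handle;   // stand-in for some unmanaged resource
    bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);  // already cleaned up; skip finalization
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // release other managed IDisposables here (only safe to touch
            // them when called from Dispose(), not from the finalizer)
        }
        // always release the unmanaged resource
        // CloseHandle(handle);  -- hypothetical Win32 call
        handle = IntPtr.Zero;
        disposed = true;
    }

    ~Wrapper() { Dispose(false); }  // safety net if Dispose was never called
}
```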

    This is not what everyone is doing. Even the framework designers are using this pattern/interface to indicate 'logical disposal' (as per my enumerator comment).

    Here's a few links from a month or two ago where I 'had it out' with a few people in another forum. I'm not sure what we really concluded in the end, apart from that I don't think that Dispose() is the place to release event listeners.

    26862, 26863, 26342, 26353, 26386, 26389, 26393, 26395, 26396, 26398

    spod wrote:
    Sqltypes implement tri-state logic due to null being a valid value in sql land. This always confuses me, but i think is the specified behaviour ( sql92 i think? ). The lack of conversion from DBNull is a pain.

    Yeah. There is no SQL92 specified behaviour for Xor() as I discovered courtesy of lUka (who I've noticed has an account here. Hi luka! Wink).

    I know how they work. I know about how 'true' is an operator. I know about TVL, etc. I've just seen a few oddities/problems in the SqlTypes library is all I was trying to say. Specifics aren't jumping to mind.

    spod wrote:
    I don't think the ctor leak is a problem in .net? i thought the fact that .net objects are fully constructed by the time the ctor is called ( even in base classes ctors ) prevented this problem?

    Leaking in managed code is different to leaking in unmanaged code. You don't have 'allocated but not reachable' leaks. But you still have 'reachable but not live' leaks. The classic problem is the lapsed-listener, and subscribing to events sets you up for a fall here, unless you are very careful to unsubscribe. Matching Bind/Unbind calls.. sound familiar?
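
    A minimal sketch of the lapsed-listener problem being described (Publisher and Listener are invented names):

```csharp
using System;

class Publisher
{
    // the delegate's invocation list holds a strong reference
    // to every subscriber
    public event EventHandler Changed;

    public void Raise()
    {
        if (Changed != null) Changed(this, EventArgs.Empty);
    }
}

class Listener
{
    public Listener(Publisher p)
    {
        p.Changed += OnChanged;  // p now references this Listener...
        // ...so as long as p is alive, this Listener is 'reachable
        // but not live' -- it can never be collected unless we do:
        // p.Changed -= OnChanged;
    }

    void OnChanged(object sender, EventArgs e) { /* react */ }
}
```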

    Here's some good reading:



    Say I have a class that implements IDisposable. If the constructor throws then no reference will be returned.. but what if Dispose() was a 'required' call? There are many subtle problems that can creep in, and MS keeps sending messages that lull developers into a false sense of security.. particularly regarding memory management.

    spod wrote:
    I also thought you couldn't leak on an assignment call any more - can u post an example maybe

    You can read about it on this thread.

    spod wrote:
    my group has a lot of experience in this, particularly for .net web-apps. let me know if you have specific stumbling blocks and i may be able to help with some pointers...

    I'd love a 'walk-through' article on how to set up my application to set up the resource files, embed them, and grab stuff out of it. I tried to do this once ages ago, I swear I followed all the steps in this long drawn out process that took hours, and then.. no dice. Got a link?

    spod wrote:
    i agree that this isn't mandated in any way, and it's up to the developer to decide how to use ( or abuse ) these. I think the general guidelines are: derive from ApplicationException and above ( don't derive from Exception ); choose a flat and wide inheritance hierarchy for your exceptions; throw standard exceptions for standard conditions; propagate the inner exception...

    Yeah. There are serialization problems with exceptions too, in remoting situations. Not having error numbers has caused some people grief (because of localised messages). The 'exception' objects don't even need to actually derive from System.Exception as far as the CLR cares.. this all sucks a bit.

    spod wrote:
    this is something else we have a lot of experience with. it is expensive, but in my experience not as bad as you think.

    Um, yeah. Generally this hasn't been a big problem for me either. But I have some amazingly inefficient code at the moment. I needed to serialize some *huge* amount of data (100's of MB) and deserialize it, and it took several minutes (like 5 or 6). I know, that's shoddy.. still. Adding and returning custom ValueTypes to and from the SerializationInfo requires boxing. That sux.

    spod wrote:
    the reflection cost is only paid once ( the reflected methods are held as statics essentially ), and you can attribute the xml generated to minimize the size etc.
    i've seen apps that wrote custom serializers to get round just this problem, but only shaved around 30% off processing time - ymmv of course - but before going so far as to custom serialize it all take a look at shortening tags through attribution ( and maybe compressing the xml contents if that's an option ) - i believe the serialization process is O(n) on the size of the document being serialized...

    Boxing is probably what costs me, more than reflection (just guessing). This item is still on my plate, haven't really had a close look yet. info.AddValue(..) takes System.Object and info.GetValue(..) returns Object. I implement the ISerializable interface, with GetObjectData(..) etc.
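
    To illustrate the boxing cost being discussed: SerializationInfo traffics in System.Object, so custom value types get boxed on the way in and unboxed on the way out. A sketch (Inner and Outer are invented types):

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
struct Inner { public int A; }

[Serializable]
struct Outer : ISerializable
{
    Inner inner;

    public void GetObjectData(SerializationInfo info, StreamingContext ctx)
    {
        // no Inner-specific overload exists, so this resolves to
        // AddValue(string, object) and the struct is boxed on the heap
        info.AddValue("inner", inner);
    }

    Outer(SerializationInfo info, StreamingContext ctx)
    {
        // GetValue returns Object, so the cast unboxes on the way out
        inner = (Inner)info.GetValue("inner", typeof(Inner));
    }
}
```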

    spod wrote:
    i hear you on this one. if we are talking about the same thing, custom soap extensions are one of the few ways here, and i wouldn't call it easily exactly Smiley

    I'm just talking about the namespaces that get embedded in the artifact that use your type's name and version.

    spod wrote:
    the += and -= operators being used as aliases for delegate.combine is particularly horrible i agree Smiley - it always makes me cringe..

    Empathy is nice. Smiley

    spod wrote:
    do you have access to pdc bits at all? there's some cool stuff coming in whidbey / longhorn with all the objectspaces work. I agree it's roll-your-own these days.

    Nope. My MSDN subscription lapsed last year. I'm in no hurry to part with thousands of dollars to renew it. I've read a fair bit about it online, including some of the specs. I'm not sure how wonderful ObjectSpaces are going to be, but like you said, I've already had to roll-my-own, so I'm going to have to maintain my own for the next few years too. I'm committed to them now. Hopefully, ObjectSpaces suck, that way I won't have wasted two years of my life. :/

    spod wrote:
    i don't think you can get a perf benefit by passing a struct by ref. not sure here, but i think it causes the struct to be boxed to an object, and a pointer to that object to be passed, negating all the benefits etc...

    Order of magnitude my friend (in most situations).

    When I say 'by ref' I mean with the 'ref' keyword, not with a 'reference type after a box'. I.e.

    void DoSomething(ref MyValueType value)

    not

    void DoSomething(Object value)
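
    To spell out the distinction (Big and the method names are invented): 'ref' passes a managed pointer to the struct with no copy and no box, whereas passing it as Object boxes it:

```csharp
struct Big { public long A, B, C, D; }  // 32 bytes, copied on every by-value call

class RefDemo
{
    static void ByValue(Big b)   { }  // copies all 32 bytes to the callee's frame
    static void ByRef(ref Big b) { }  // passes one pointer; no copy, no box
    static void Boxed(object b)  { }  // boxes the struct onto the heap -- the
                                      // expensive case spod was thinking of

    static void Main()
    {
        Big big = new Big();
        ByValue(big);
        ByRef(ref big);
        Boxed(big);   // heap allocation + copy into the box
    }
}
```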

    spod wrote:

    jj5 wrote:
    My biggest problem is "how to manage memory in a managed environment", sure we don't 'malloc', but 'new Thing()' has the same effect, with less explicit control (it's not like we're not using memory anymore). When I see DataBinding internals using 'WeakReference' I worry..
    I'm not sure i follow this one, could you maybe post an example? memory management is one of the least of my worries these days except with interop, or is that what u mean?

    I'm talking about not really knowing how I'm supposed to release event subscriptions. Say I have a class that derives from System.Windows.Forms.Form (view/controller) that hooks a few event listeners to a custom business object (model). When I 'close' this form I need to unsubscribe from these events, or else the multicast delegate invocation list will keep a reference to my form, keeping it in scope and not a candidate for collection. I'm not sure when to unsubscribe. The situation is complicated by the fact that the form in turn contains controls that also hook events, and even things like ListViewItem classes that hook events.

    Not everything implements IDisposable, but even if it did I'm not sure that unbinding in there is a good idea, because it changes the semantic from 'optional' to 'required or else you leak'. If Dispose is no good, then how do I manage releasing events? At the moment I've got a 'verbose' method that uses a custom IView interface and propagates 'unbind' calls down an 'ownership' tree.. but this is pretty long winded and cumbersome. I.e. I'm writing thousands of lines of code that are purely 'memory management' code to avoid memory leaks..
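
    For concreteness, a sketch of the unbind-on-close chore being described (Model and its Changed event are invented names):

```csharp
using System;
using System.Windows.Forms;

class Model
{
    public event EventHandler Changed;
}

class MyForm : Form
{
    Model model;

    public MyForm(Model m)
    {
        model = m;
        model.Changed += OnModelChanged;   // model now holds a ref to this form
    }

    void OnModelChanged(object sender, EventArgs e) { /* refresh view */ }

    protected override void OnClosed(EventArgs e)
    {
        model.Changed -= OnModelChanged;   // without this, the form is kept
                                           // alive by the model's invocation
                                           // list even after it is closed
        base.OnClosed(e);
    }
}
```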


  • A day in a life

    (NOTE: um, lucky for me I had a copy of most of this post in my clipboard, my session timed out before I hit submit and my post bounced, I hate web based systems. You guys wanna bump the session timeout a little?).

    spod wrote:
    Hey jj5.

    This is good feedback - could you give us specifics though please?

    Sure. This is really quick (yeah, so quick my session timed out.. Tongue Out), so forgive shortness, spelling mistakes, lack of explicit details, etc.

    WinForms databinding sucks. It gobbles exceptions (very bad thing to do from my perspective, many apps are left in an indeterminate state once they throw), I can make it leak with little effort (i.e. while trying not to), and I'm pretty sure it just 'glues' event listeners permanently to bound objects then manages a weak reference to same, although I'm not certain of this. DataSource is 'object'? Come on..

    The WinForms validation patterns suck. Validate() returns a value and causes side-effects, it's anyone's guess where the input focus ends up, I can make this fail (again, while trying not to) such that an attempt is made to bind data that I invalidated. Validation code is defined and runs in the user-interface? No problems if this is what I want to do, but I don't. I want full support for a decent MVC pattern 'ready to go' in WinForms, the closest I have is support for events, which causes me memory problems.

    There are many bugs I've stumbled across in basic UI controls. Reading a value from a NumericUpDown for example can cause a TextChanged event to fire by side-effect (or something like that, from memory, maybe the other way around). TabControls are nothing but pain. Visual controls are all pretty basic (i.e. not very pretty, or consistent (e.g. compare a toolbar to a menu)). Visual inheritance 'barely' works and I 'lose' stuff specified in the designer all the time. Not to mention problems that I've had with VS.NET while using the designer that caused it to leak until it was using over 1GB of RAM within half an hour (I suspect a Dispose/events problem?) requiring a restart. Each bug requires a 'work around' that I have to implement throughout hundreds of thousands of lines of code.

    Lapsed-listeners crippled one of my apps because I had assumed that the .NET events implementation would 'magically' take care of unsubscribing events for me (as a part of garbage collection), how wrong I was. I've since discovered how primitive it really is and have been prodding for easily 6 months and have not been able to get much guidance on how to handle this problem (feeling very much on my own). This is a serious 'memory management' issue for me, while MS claims issues like this are *gone* in managed code? I asked about this on Channel9 the best part of two weeks ago but didn't get a response.

    The Dispose() pattern is terribly specified. I really don't understand when and where I am not supposed to use it. I see it being 'butchered' by framework classes (on an enumerator to indicate 'completion' or 'logical disposal' in the C# 2 spec for example? What's the go with the Disposed event on Component, will that always fire if Dispose is optional?). Some say Dispose is only for managing the timely release of unmanaged resources, others say it's for anything you want to use the 'using' keyword on, others say it should be called if it has it, others say it is optional, there are traps with finalization, no-one really knows what a 'managed resource' is, etc. What is 'managed' about this? I can use this stuff, 'my own way', but these are system interfaces, surely they should have rigid implementation contracts.

    The IComparable interface doesn't specify how to treat ValueTypes. What of ValueTypes that represent a 'null value'? Why should Int16.CompareTo(123) throw an exception and not allow for an attempted conversion? There is little direction (or discussion) that I could find. ValueType derives from System.Object but I can't assign it a null reference (I'm not saying I should be able to, just that this causes anomalies).

    What's going on with Close vs Dispose? Was that a bad decision?

    What of the difference between implementing an interface and exporting an interface? ControlCollection calls Dispose on the Control interface for example, but the doco claims (correctly I believe) that the IDisposable interface is to be called. I haven't had any direction on the rules regarding 'reimplementing' interfaces on derived classes. The 'using' keyword on the other hand always calls on the IDisposable interface. Form.Dispose causes a change in state to ListView items it owns before calling ListView.Dispose(), it sucked to find that out the hard way.. by myself. What is the go with tight-coupling like that?

    I've seen strange conversion behaviour in the SqlTypes library ((SqlBinary)null for example, fair enough I guess..?), along with stuff like Xor() that defines treatment for null values, and contains 'set like' operations which I'm not sure are useful in imperative code. There are no defined conversions for DBNull. There is no support for nullable types in the UI controls, etc.

    I had serious problems trying to figure out how to bend the ASP.NET controls, etc. to my will. Ended up implementing my own HttpHandlers (which I thought were cool) in the end. The ViewState was a problem for me (not understanding how to exploit it, control it, etc.), the graphical designers were far from helpful (I was way more productive when I stopped trying to do battle with them and just spat text at a stream) and trying to manage a WebForm 'as if it were a WinForm' just doesn't work for me (unless I'm writing 'throw away' code, which doesn't exist right?).

    What of throwing exceptions from constructors and the potential for creating memory leaks since 'this' is addressable and can be externalised (trivially, by subscribing to an event) while in the constructor? There is no guidance on this. Where is the BOLD PRINT explicitly stating that when I use '+=' I'm passing a reference to my class to another class that must be removed to avoid a leak?
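
    A sketch of the constructor-leak scenario: 'this' escapes via an event subscription before the constructor throws, so the caller never receives a reference to clean up, but the static invocation list pins the half-built object anyway (Registry and Leaky are invented names):

```csharp
using System;

class Registry
{
    public static event EventHandler Something;  // static: lives for the AppDomain
}

class Leaky
{
    public Leaky()
    {
        Registry.Something += OnSomething;  // 'this' escapes here...
        throw new Exception("ctor failed"); // ...then construction fails.
        // The caller gets no reference to unsubscribe with, but the
        // static invocation list still keeps this half-built object
        // reachable forever.
    }

    void OnSomething(object s, EventArgs e) { }
}
```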

    I haven't found it easy to create 'localizable' apps. Admittedly, I haven't tried too hard. But then, who has the time? Where is this 'pit of success' you speak of?

    I'm not super impressed with the exception management patterns. They're good, but they're not great. I can throw a bare 'Exception' for example, or throw 'system exceptions'. I think more could have been done in this regard. I'm not sure that relying on polymorphism was necessarily the way to go.. most code I see just uses 'throw new Exception("unlucky");', no requirement to propagate the inner exception, and so on.

    I'm not sure that I love the support for serialization. What I played with for XML serialization was extremely limited, and relied too heavily on attributes for my liking (i.e. exclusively?). Deserializing with the (reflection based?) private constructor on a class marked as [Serializable] is sloooooow. I think I'd have to write unverifiable (unsafe) code to speed this up, short of implementing my own serialization framework (non-trivial, graph problems, etc.).

    Serialization with the SoapFormatter ties the serialization schema to the type version, I don't know how to avoid this easily.

    Much of the doco sucks in general. For me the problem with doco tends to revolve around not enough rigid and correct detail. I know it's hard, so I don't want to come off as 'sledging' anyone. Sometimes the opposite problem exists, there is 'too much' information, and I just can't allocate the time to consume it. Sorry, damned if you do, damned if you don't. Tongue Out

    What the 'event' keyword actually does to your class was not at all intuitive. Nor were the +=, -= operators. Now that I know, I'm happy enough, but I'm not sure that I'm the majority on this issue (many of the devs I speak to have never thought about what happens when they subscribe to an event).

    I have a 'configuration' interface for reading app settings, but have to use an 'xml' interface for writing.

    I've had to come up with my own tools and patterns for managing Object/Relational mapping and optimistic concurrency in a distributed environment. DataSets and DataBinding have been either 'not powerful enough', 'too buggy', or 'too difficult to implement in complex scenarios' for me to use. I can get 80% of the way to where I need to go using these methods, the last 20% requires me to stop using these methods and DIY.

    Managing boxing can be unintuitive. There are 'out' and 'ref' keywords but no 'in' (or 'const') for passing structs to methods. Performance can be significantly improved by passing by ref, but I can't communicate my intentions with the language (let alone ensure they are enforced).

    I've been on the receiving end of plenty of 'incorrect' advice over the years as we all plough forward trying to figure this stuff out (fire and forget thread leaks and Close vs Dispose spring to mind). And a part of me really doesn't like having to give you guys feedback. On the one hand I'm your 'customer' and I'm telling you what I need, sure, but on the other hand I'm a software developer just like you who's spent countless hours figuring out what's wrong with your platforms. Then I spend time telling you so that you can fix it, then I part with thousands of dollars again next year so that I can buy it off you and figure it out again.. I'm not having delusions of grandeur, I'm just upset that I have to waste time trying to figure out how I'm supposed to use the stuff you're shipping, particularly when it seems like grassroots or 'first class concept' stuff, like 'events', 'resource management', 'exception management', 'system interfaces', etc. that you are supposed to have looked after for me, but that I feel are too rough around the edges.

    To add insult to injury, all anyone seems to want to talk about these days is Longhorn, Yukon, and Whidbey, or just make sweeping comments about how wonderful managed code is. I'm using .NET v1.1 on Win2k with IIS and SQL Server 2000 and I'll be deploying apps that are expected to run on those platforms for at least the next 3 years. I don't know how comfortable I am about that at the moment.

    Many of the books I've read on .NET have been wrong, oversimplified, or misleading. As is much of the publicity that I see (compared to the reality that I see) and online material. The samples that I've looked at that ship with VS.NET are a joke (i.e. built in memory leaks in the WinTalk sample for example (from memory)). This is not a mature technology, but all the info coming from MS is about how it is becoming obsolete..? Will you be calling my code 'legacy managed code' before I ship it? Whidbey may be a mixed blessing, half of my code will likely become redundant with the introduction of generics, that might be good, but it kind of sux when I'm going to be writing more of it just after I send you this.

    The ".NET" brand name _really_ ticks me off.

    At least I'm fortunate enough to be deploying in a controlled environment. I've seen people asking about all sorts of stuff to do with deployment that was supposed to be simple, and apparently there are still plenty of systems that don't have (and won't install) the framework.. problems with code access security, 'decompilability', etc.

    Sorry for the disjointed nature of this post. I started at the top and typed everything that sprang to mind as fast as possible. My biggest problem is "how to manage memory in a managed environment", sure we don't 'malloc', but 'new Thing()' has the same effect, with less explicit control (it's not like we're not using memory anymore). When I see DataBinding internals using 'WeakReference' I worry.. basically, I suffer from permanent information overload, and when I find bogus, unclear, incorrect, or buggy doco or framework implementations that waste my time then I take one step closer to the point where I curl up into the fetal position and start sobbing. Help me damn it! It's not that I don't want to think hard about programming, it's just that I want to focus on business problems knowing how my tools are going to help me, but my experience with .NET to date has been focusing on framework issues and how they are hindering me.

    Perhaps I'm expecting too much from my tools, and not taking enough responsibility for myself? I reckon I'm pretty familiar with the framework, and I reckon I'm a decent developer, but I've been 'doing battle' with .NET for years; it has been very far from 'the light of my life', much closer to the 'bane of my existence'. I guess I'm spoiled in many ways too, it's much better than what MS has shipped in the past, and I understand that it's a mammoth effort, but when I drill down into the nitty gritty I generally find the dirty laundry and know that so much more could be done, or find that I can't get the direction that I need. Oops, now I'm ranting aren't I..

    Um, the C# syntax is nice.. Tongue Out (although VS.NET doesn't syntax highlight the add and remove keywords)


  • Interview questions

    I've been goth-served!

    prog_dotnet wrote:
    The stereotypical programmer is a shy young man who works in a darkened room, intensely concentrating on magical incantations that coax the computer to do his bidding.

    Not shy behind a keyboard. Wink

    prog_dotnet wrote:
    He can concentrate 12-16 hours at a time, often working through the night to realize his artistic vision.

    16 hours. Minimum. 

    prog_dotnet wrote:
    He subsists on pizza and Twinkies.

    Cola and cigarettes.

    prog_dotnet wrote:
    When interrupted, the programming creature responds violently, hurling strings of cryptic acronyms at his interrupter—“TCP/IP, RPC, RCS, SCSI, ISA, ACM, and IEEE!”

    This is a somewhat dated stereotype. The programming creatures have evolved an aptitude for TWJ (tri-word-jargon) required to impress management. A more modern response would be:

    envisioneer turn-key experiences
    engineer cross-platform e-business
    target efficient e-markets
    optimize web-enabled action-items
    reinvent seamless relationships

    prog_dotnet wrote:
    The programmer breaks his intense concentration only to attend Star Trek conventions and watch Monty Python reruns.

    Once again, this is a dated stereotype, still embraced perhaps by those that for some twisted reason want to be perceived by others as programmers. For my money, South Park, Futurama, the Matrix and the LOTR are where it's been at for many years now.

    prog_dotnet wrote:
    He is sometimes regarded as an indispensable genius, sometimes as an eccentric artist.

    But more usually he is regarded as a *.

    prog_dotnet wrote:
    Vital information is stored in his head and his head alone.

    This is still true. Not for lack of trying mind you. Oh for an expressive programming language!

    prog_dotnet wrote:
    I don't know what your beef is, but there are a lot of people that are not as knowledgeable as you apparently are.

    Flattery will get you everywhere. My 'beef' was being lectured about control structures in Visual Basic, aggravated by the fact that you didn't even comment on why you felt this was necessary (when, in my view, it wasn't, or at least appeared to miss the point).

    prog_dotnet wrote:
    “A prudent question is one-half wisdom.”
    —Francis Bacon

    That's cute. What's the other half? What of a question that is not prudent? What of statements out of context?


  • A day in a life

    Jernej wrote:
    I think it really boosts productivity. Well, got to fix a bug here and there and I'm done... A few hours later... still fixing that bug, which was supposed to take me just a few minutes... And so a day goes by and I still have no idea how to fix that silly bug.

    Turn a day into a year or two, and then you'll know about my experience with same.

    The closer I get to .NET the more unamazing it becomes. Obviously, some of it is awesome, but much of what is awesome about it is just the culmination of what other tools have learned or been doing for a long time. Some of it also sucks, and I don't feel that MS have done a very good job of telling developers how they are supposed to use it. By the time I'm done on my current .NET project, I might as well have implemented in C++, VB6 or Java, as C#.

    Those are my comments from the perspective of a developer who's been using this technology in the trenches for the last couple of years.