I think the reason you had compiler developers saying that "F# was their favourite dynamic language" is more an issue of your refined (limited) definition of "dynamic" in this context, i.e. a question that (mischievously?) sets up the answerer to fail your implicit, scope-limited definition... [A]
This kinda reminds me of some experiments that were done on children of varying ages, who were asked "which of these two straws is taller". The children were individually sat down at a table with two identical straws (length, colour, etc.). However, one of the straws was moved further away. The three-year-olds (I'm not actually sure of this age, but it'll do) always selected the straw furthest away. In other words, the problem was actually a mismatch between the experience of the questioner and the answerer. The questioner's definition of "taller" was more refined than the child's. However, once those children had been shown what was actually meant by the question, they moved the straws next to each other and (effectively) answered "they are both the same".
PS: Hey, I couldn't resist, given Anders' little dig about academics and functional programming...
At any rate, I am more interested in what the postscript contained, from a developer-feedback point of view, assuming you were venting about networking or C or something that the interviewees (most important, always) could engage with...
Ah, that's why I said it was detracting.
I bitched about the use of the word "pound" for the "#" symbol (hash or hatch). Then went on to question why it was cool to say "whack" instead of "slash"... grumpy old man stuff... there, I said it...
In the protocol-agnostic C code example, where the Winsock "service" is IPv4-only but the PC on which it is running also has IPv6, surely there is going to be a performance/user-experience issue related to timeouts, unless I'm misinterpreting what was said. E.g. if getaddrinfo() returns multiple IP addresses, both IPv6 and IPv4, the IPv6 ones are listed first, right?
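To make the concern concrete, here's a minimal sketch of the usual client loop over getaddrinfo() results, in the order the resolver returns them. POSIX sockets are shown for brevity; the Winsock version is almost identical apart from WSAStartup()/closesocket(). The host/port ("localhost", "80") are just placeholders:

```c
/* Sketch: walk getaddrinfo() results in returned order.
 * If IPv6 entries precede IPv4 and the service is IPv4-only,
 * each failed or timed-out connect() on an IPv6 address delays
 * the eventual IPv4 success -- the timeout concern raised above.
 * POSIX shown; Winsock additionally needs WSAStartup(). */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;    /* ask for both IPv6 and IPv4 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("localhost", "80", &hints, &res) != 0) {
        fprintf(stderr, "getaddrinfo failed\n");
        return 1;
    }
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        printf("candidate family: %s\n",
               ai->ai_family == AF_INET6 ? "AF_INET6" :
               ai->ai_family == AF_INET  ? "AF_INET"  : "other");
        /* A real client would socket()/connect() here, and only
         * fall through to the next entry on failure or timeout. */
    }
    freeaddrinfo(res);
    return 0;
}
```

Whether IPv6 actually sorts first depends on the stack's address-selection rules, but whenever it does, the sequential connect-and-fall-through pattern above is exactly where the user-visible delay would accumulate.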
EDIT: Removed detracting (grumpy old man) postscript.
I should preface what I’m about to say by stating that I have NOT used any version of Vista/Longhorn (that I’m aware of).
Anyway, with regards to UAC, is there a reason why (legacy) applications which require “administrative” privileges don’t get “sand-boxed”, essentially being run in their own “app-domain”? Something akin to what Windows 3.1 first did with DOS applications, but extended to include all of the stuff that UAC is overseeing – a virtual Win32 subsystem for each offender:
When it’s first encountered, the UAC dialog informs the user/admin, then sandboxes/partitions the application.
Obviously it would not be easy to implement, considering things such as state (remembering, merging, etc.). But I imagine that it would provide a better end-user experience, given the aims of the initiative. Yes/No?
PS: I like the Freudian-slip at 38:12 "With respect to shame, ah, shape of our name-space"...
Anil, thanks for posting your thoughts about Stored Procedures and, more specifically, their history – it’s good to see that at least some of the “folk-lore” of Stored Procedures that I have heard does have some basis in fact. It also, perhaps, explains the “enthusiasm” of some of the responses (sensitivity to Ba$tardisations of the past).
So what was I raving on about?
As I stated up front, I do not have an in-depth understanding of SQL and, more specifically, SQL Server. I see, now, that some of the things that I referred to as “Business Logic” fall into the category of “Validation”, and can be achieved without the use of (user-defined) “Stored Procedures”. Though, I suspect, these functionalities would actually be specialisations of Stored Procedures behind the scenes.
It was this naivety that led me to propose ways of working around things that have already been side-stepped via this data-centric logic: “Appropriate Synchronisation” (between SQL Server and SQL EveryWhere) being achievable via code analysis and/or code attribution of Stored Procedures.
So thanks for taking the time to point this out. Though in hindsight, it is a little embarrassing, on my part – I knew that, sort of thing…
Anyway, I had a sense of déjà vu when you talked about memory-based databases.
I’ve been pretty much a low-level, bits’n’bytes developer for a couple of decades now. And the reason why I just recently had to start using SQL was due to a sub-contracting job where they were building an Account Reporting (read-only) application that needed to have query responses in tenths of a second. The problem was that the “Database solution” was quoted as requiring a dozen or more servers, and a similar number of administrators!? Well, to me, this sounds like someone wants to buy a new house, or another agenda is at work. But they’re the experts, I suppose.
Anyway, I was asked to (re)write a pilot of a memory-based data-storage engine, on the cheap, and I decided to start from scratch, but without all the nice, flexible stuff such as being able to define the structure of the data, instead opting to hard-code this – the existing RecordSet/DataSet stuff I looked at being far too slow: less than 10K records was fine, and some were OK to ~100K, but forget a million or more. So, the end result was a COM object that has no trouble storing (caching) 3.5 million records in ~200MB of RAM, with its methods able to return results (hard-coded queries) in the low tenths of a second – 25 test clients continually hitting the COM server, 6 threads (usually). Onsite, I estimate they’re loading between 6 and 9 million records – at least in the stress testing. 4GB RAM in the server.
This seems to have been successful, and now I need to evolve this. One of the things they’d like, is to be able to use SQL in their queries: “Yeah, OK. And for my next magic trick …”. So now the question (yay !!):
Would there be a way to cherry-pick features out of SQL EveryWhere, so I could, for example, use the query engine? And better still, be able to “pre-compile” queries in place of stored procedures? The aim, obviously, being to replace my existing hard-coded “storage engine”.
PS to Site developers: My paragraph formatting is being lost. I hope it isn't lost when I post this edit (again)...
Anil, during the interview, you basically dismissed (but justified) the loss of stored procedures. I then started musing over this. I suppose I should preface what I’m about to say by stating up front that I’ve only recently started to work with relational databases (about 15 months ago), so if I’m off-base, please be kind.
Anyway, I basically asked myself, “What problems do stored procedures solve?” and “Is there a way of leveraging these solutions, so that business logic follows, or is synchronised along with, the data?”. Compounding this is the issue of synchronisation itself.
Synchronisation, first. A few days ago, there was a presentation at the Sydney .NET users group by Geoff Orr, where he discussed the use of Partitioned Tables, a new (?) feature of SQL 2005 Enterprise, and that seems like a perfect fit for disconnected data
sources. In this scenario, each disconnected-user maintains a Partition within each of the (Partitioned) Tables on a Server, and thus user synchronisations are “sandboxed”. Any thoughts on this ?
Now moving to the business logic. Is the reason why you’re not supporting stored procedures centred around the SQL engine’s size, and if so, would there be any benefit to re-examining how stored procedures are implemented? E.g. the stored procedures actually being (re)compiled into .NET Intermediate Language, extending IL if necessary?
I suppose I should actually read up on how and when SQL compiles stored procedures, but with experts around... What I’m picturing is that T-SQL is initially compiled into IL, and the IL form of the stored procedures is synchronised. If appropriate/necessary, the IL form is then further processed (JIT-compiled) by the respective data storage engine.
Obviously, there are going to be issues with what or how much of the business logic needs to stay with the data (appropriate synchronisation). The point being that at the enterprise-level, we can continue to move business logic into the data-tier (following
best practices, yes ?). I’m imagining some form of (high-tech) code analysis, or a rudimentary (programmer-controlled) form of procedure tagging (attributed code).
I wish I had more time to look into WinFS and LINQ. I mean, does LINQ surface stored procedures as methods of a (database) object ?
EDIT: added "to be" to the 2nd last paragraph, near the start.
Wow. Even though this thread is really old, I just had to respond.
Years ago, I had these exact same thoughts, the genesis of which was probably triggered by Helen Custer’s Inside Windows NT, and probably Gordon Letwin’s Inside OS/2 – tripping over my old, disused PCs (22 at last count) also provided constant reminders, until I eventually made some shelves and moved them out of the way. The hoarder’s stubbed-toe theory…
Back then, apart from processor speed and memory constraints (only?!), the biggest problem was that the processors didn’t have any, or the appropriate, hardware support to allow “protected” operations, and I really loved strongly-typed languages – I simply built better code.
Then I’d heard about something called Pseudo-Code, and I quickly realised that combining the two ideas might lead to solutions for the lack-of-protection-hardware problem. Actually, I was already familiar with the idea of Pseudo-Code, but I knew it by its implementation: BASIC (tokens, etc.).
Anyway, at that time, the tools (IL, compiler, code analysis, etc.) were obviously not around, and I was not able to build the tools – well, if I’d won the lottery, I’d probably just be getting to this point now. So, it’s great to see a lot of this stuff coming to fruition. It’s even better to see that you have the compiler people firmly entrenched in the team… maybe I should have gone to Uni…
Anyway, just like with Java, I’ve been meaning to learn and actually use C#. I see that C# has one of the things I ranted about at one point: structured source-code commenting, which aids the programmer in producing self-documenting source code. Though I’m not sure that I like its XML-like “tagging” implementation.
Good to see this happening. I was really pissed when the UI stuff was moved into the kernel in NT 4.0. For Workstation, fine. But for Server... let's be Phil-osophical here: what were they thinking... (Dr Phil).