samdruk

Niner since 2004

Software Engineer, Michigan, MIT, Robotics at SSL-LOOP, Zortech, Cognex, Cygnus, LinkExchange, Microsoft bCentral, Microsoft Jade Studios, Microsoft WinFS, SQL Server, Data Programmability



  • Expert to Expert: Brian Beckman and Sam Druker - Deep Entity Framework

    Charles wrote:

    I'll see what I can do

    Pass. I'm already overdue for a dunk tank (sorry Jenn).
  • Expert to Expert: Brian Beckman and Sam Druker - Deep Entity Framework

    staceyw wrote:

    Thanks to all.  Very informative.  However I am still wondering where the business logic goes?  My brain wants it to go inside the EF so I can code it in 1 place using c# (for example) and not have it sprinkled all over the client, BLL, and sprocs in the server.  Is this the case?  Do I use code-behind to add my BL inside the EF and it travels with the EF?  If so, great, I would like to see a video on it.  Also, personally, I would like to never have to switch into tsql again as I don't like moving between DSL and keeping those all fresh in my head, plus all the added complexity.  Will we be able to code our sprocs inside the EF using .Net?

    In my own view of the "state of the art" app model, BL is specific to a tier [don't ask me about tier-splitting yet--I'm an engineer, not a scientist]. Yes, code-behind for mid-tier or client logic, but your sprocs are still in your data tier with that dev environment. Architecturally speaking, I do anticipate being able to write "BL"-type sprocs in the database with Entities. Of course, you can write sprocs inside SQL Server using VB or C# today. For certain scenarios that may be a good head start.

    In my experience, schemes for "traveling code" or "code shipping" are a big red flag. Distributed deployment is hard enough; adding a self-modifying aspect can get crazy pretty quickly.

    staceyw wrote:
    Also, as you hinted to.  It seems to me that the model I really want is have the EF be exposed as a service on the server side.  Then my client can connect to it (using Linq) using a client side EF provider.  So it will look and feel like DLinq, but will be talking to EF front end instead of ADO/DB directly.

    And what I "really" want to do someday is this from the client:

    Future<Customer[]> f = Future.Create(() =>
            DB.Customers.Where(c => c.Active == true).ToArray());

    ObjectDumper.Write(f.Value); // Wait and print active Custs.

    So this "ships" the Func to the Server (i.e. does not convert to TSql).  And the server processes Func directly (like a dynamic sproc). So it is like directly passing a sproc right into the server and getting results.  Objects are not created or serialized until rehydrated on the client, the stream from server to client would be byte[] (i.e. not xml).

    Thanks much.

    Sprocs are the "material in the room", as Brian would say. But I can imagine a world where queries are shipped over directly as canonical trees. That would be like the very first SQL query processor, which was a new front end basically grafted onto the existing QUEL pipeline. The EF architecture was inspired by that example, in fact; the EF design was faced with needing to support multiple query syntaxes from day one.

    On the serialization format, note that TDS takes care of all of that, just as in pre-EF ADO.NET, ODBC, JDBC, and OLE DB--whether or not you use XML scalar types.

    Small soapbox moment, diving into the opening: 

    Of course, "from c in Customers where ... select c" is itself not a func; it's an expression. Further, it's an expression whose evaluation can be manipulated in very interesting ways by an optimizer. Even better, it can also be composed while maintaining that set of properties. And finally, the monad itself can be treated by the language as a unit of remotability. All four of those things are very, very good for high-performance, low-obscurity database programming.
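
    The func/expression distinction above can be sketched in miniature. This is an illustrative Python toy, not EF or LINQ code--all the names and the translation scheme are invented--but it shows why a tree you can inspect and compose beats an opaque function when a provider wants to rewrite or remote the query:

    ```python
    # Illustrative sketch: an expression tree a query provider can inspect,
    # compose, and translate, as opposed to an opaque func it can only call.

    class Table:
        def __init__(self, name):
            self.name = name

        def where(self, column, op, value):
            return Where(self, column, op, value)

    class Where:
        """Represents 'filter rows where column <op> value' without running it."""
        def __init__(self, source, column, op, value):
            self.source, self.column, self.op, self.value = source, column, op, value

        def where(self, column, op, value):
            # Composition: wrapping builds a bigger tree, still unevaluated.
            return Where(self, column, op, value)

    def to_sql(node):
        """A provider (or optimizer) walks the tree; here we just translate it."""
        if isinstance(node, Table):
            return f"SELECT * FROM {node.name}", []
        inner_sql, preds = to_sql(node.source)
        preds = preds + [f"{node.column} {node.op} ?"]
        base = inner_sql.split(" WHERE ")[0]
        return base + " WHERE " + " AND ".join(preds), preds

    query = Table("Customers").where("Active", "=", 1).where("Region", "=", "WA")
    sql, _ = to_sql(query)
    print(sql)  # SELECT * FROM Customers WHERE Active = ? AND Region = ?
    ```

    Because each `where` call returns another tree rather than executing anything, an optimizer is free to reorder, merge, or ship the whole thing elsewhere before evaluation--the property the paragraph above is pointing at.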
  • Expert to Expert: Brian Beckman and Sam Druker - Deep Entity Framework

    MetaGunny wrote:
    I never understood the entity framework, but now I do.  It seems very interesting.

    Thanks, me too.
    MetaGunny wrote:

    Currently, I have a data access class I wrote called "StoredProcedure".  I then have code generators to create the stored procedures, and the VB.NET classes.  It's specific to the type of operation, such as insert, update, delete, select as output params, select multiple, etc.

    Although I think it's very clean, replacing all of that with the entity framework, if it's clean and is performant, would be great.

    However, the stored procedure class I created offers quite a bit and has some intelligence in it: for example, formatting the sproc name, validation, not adding certain parameters if they don't pass validation and have a default, etc.

    The EF engine and generators have hooks to enable common validation scenarios. I can't speak to the sproc renaming off the top of my head--let me see if I can get Pablo or Tim or Steve to stop by for an answer.

    MetaGunny wrote:

    Also, I plan on possibly adding some type of load balancing to it, possibly based on the stored procedure name (or a parameter specifying whether the sproc is read-only or modifies data), and also user-specific connection strings. (This way, for example, you can set up a mirroring database and have the read-only stored procedures executed on the mirrored database.)

    I'd be curious to see whether the entity framework would be able to do this as well, and/or whether or not you can inherit from and modify the objects responsible for this.

    One nice side effect of doing a bunch of metadata plumbing under the hood is that we can start to regain some of that flexibility with connection strings.

  • Expert to Expert: Brian Beckman and Sam Druker - Deep Entity Framework

    Nomenclature comment first, since I get myself confused. ADO.NET is basically at its third major version as of this imminent (not-quite-Orcas) release. Most of the Entity Framework parts of ADO.NET (the mapping/entity provider; the object facade that builds on top of it to provide ORM APIs; the bridge; eSQL and LINQ support) are the "v1-ish" boxes. Some boxes--the updated .NET Data Providers, command query trees, and updated DataReaders--are a bit of both.

    The EF v1 doesn't do multiple data sources. Fundamentally, it does "reshape" data in a queryable and updatable way, based on the rich model described in the EDM and a set of mapping correspondences. Our core scenarios are around core data access/independence. We will bring the reshaping capability to "disconnected" (not to be confused with "offline") programming a la DataSet soon (in a release to be named later, barring acts of force majeure, etc., etc.).
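
    A minimal sketch of what "reshaping based on mapping correspondences" means, with an invented mapping format (Python for illustration; the real EDM and mapping system are far richer and declarative in XML):

    ```python
    # Hypothetical sketch, not the real EDM: declarative column-to-property
    # correspondences turn flat store rows into entity-shaped objects.
    # The entity name, table name, and mapping layout are all invented.
    MAPPING = {
        "Customer": {
            "table": "Customers",
            "properties": {"Id": "CustomerId", "Name": "CompanyName"},
        },
    }

    def reshape(entity_type, rows):
        """Project store rows into entity-shaped dicts per the mapping."""
        props = MAPPING[entity_type]["properties"]
        return [{prop: row[col] for prop, col in props.items()} for row in rows]

    rows = [{"CustomerId": 1, "CompanyName": "Contoso"}]
    print(reshape("Customer", rows))  # [{'Id': 1, 'Name': 'Contoso'}]
    ```

    The same correspondence table can be read in the other direction to push updates back to the store, which is what makes the reshaped view updatable rather than a one-way projection.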

    That's an important step along the way to compositing data from multiple sources in the way you describe. Then we'd have the part of the problem that lets a programmer explain what span of data they want, when it should be refreshed, how to push changes, etc. Multiple sources get complicated with further details like cache coherence and isolation levels and transactions and all that, um, database stuff.
  • Pablo Castro, Britt Johnston, Michael Pizzo: ADO.NET Entity Framework - One Year Later

    We do intend to let him find his desk first, of course.
  • Peter Spiro: The power of having fun, building great databases, and leadership

    Yes, it's real. He shaved it once about 15 years ago, or so I've heard.
  • Erik Meijer: Volta - Wrapping the Cloud with .NET - Part 1

    <shameless plug for Erik>

    Erik has also been a technical contributor to Microsoft's XML runtimes and tools, the ADO.NET Entity Data Model, LINQ (in a variety of forms), the CLR, CLR compiler labs and of course SQL Server.

    </shameless plug for Erik>

    Yes, I'm biased.
  • Polita Paulus - BLINQ

    cbenard wrote:

    Thanks for the answer about using SPs with LINQ and Blinq.  For clarification though, am I to understand that if you do standard LINQ queries like "from c in ... where ... select ..." they will not be optimized by SQL Server?  If I'm understanding that correctly, it is simply passing "select ... from ... where ..." to SQL Server as an ad hoc query if you're not calling SPs.

    Thanks again.

    LINQ queries against SQL Server databases do get translated into SQL statements that are sent to the server itself for processing. And SQL Server optimizes those queries like all others; that is, you don't have to wrap a SQL statement in a stored procedure in order to get the query optimized. Stored procedures are good for wrapping up computation that's more complex than query and view management (updates, especially), of course.
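
    The no-sproc-needed point can be demonstrated against any engine with a query planner. Here is a small sqlite3 illustration (sqlite3, not SQL Server, and the schema and index names are invented): an ad hoc parameterized statement--the kind a LINQ provider emits--still goes through the planner/optimizer and picks up an index:

    ```python
    # Sketch: show that an ad hoc parameterized query is planned/optimized
    # like any other statement, with no stored procedure involved.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE Customers (Id INTEGER PRIMARY KEY, Active INTEGER)")
    conn.execute("CREATE INDEX IX_Active ON Customers (Active)")

    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM Customers WHERE Active = ?", (1,)
    ).fetchall()
    print(plan)  # the plan consults IX_Active for the equality predicate
    ```

    SQL Server's behavior is analogous: the batch a LINQ provider sends is compiled and optimized (and its plan cached) just like the body of a stored procedure would be.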
  • Brian Beckman: Monads, Monoids, and Mort

    Ion Todirel wrote:
    Mr. Beckman wrote: We want to turn Visual Basic into not only the best and most popular programming language in the world but the most advanced programming language in the world.
    Right, there is no such thing as "the best programming language"! Microsoft Research guys should know this better than anyone. "Advanced"... it would be nice to explain how... and since C#/VB share the same runtime, how can VB be more advanced than C#?

    One minor correction: Brian is not in Microsoft Research anymore. Brian and his team (including Erik Meijer) work in the Data Programmability team in SQL Server (as do I, for that matter) and do a lot of deeply collaborative development with the VB product team.

    Brian knows as well as anyone how high a bar "best programming language" would be. I'm happy he's trying that hard, and I think it will be pretty interesting to see how far it can go. As for C#/VB not being different, let me say they absolutely can be. Sharing a runtime does not mean they are limited to the same semantics and feature sets across the board (as programming languages). Consider the recent IronPython or PHP.NET releases--they use the same runtime as C# but have very different features (and in different dimensions you might consider one or the other more advanced). I can't quite remember whether the Haskell implementation on .NET that Brian was playing with last year was publicly available or not, but it falls into the same camp. 


  • Jim Gray - A talk with THE SQL Guru and Architect

    For those keeping track at home, Jim is the godfather of TP as we know it and an ACM Turing Award winner. Jim has also been quite an influence on many of us on the WinFS team (and the SQL group at large).
  • Shishir Mehrotra - WinFS beta 1 team meeting

    Nope, that's not what we're doing. We can tell from the stream open call that it's going to be a win32 stream access. Once we process the path and name information, we get out of the way and hand back a handle to the NTFS stream. Try it out, you'll see.

    Also, in response to your earlier posts about stream data: we have a metadata handling infrastructure that enables these kinds of things that you talk about. We don't provide decoders for every kind of metadata you could think of but a developer can build custom ones and add them to the system (similar to IFilters or IPropertyStores).

    Beer28 wrote:
    I bet WinFS is doing the IO twice, once from the ntfs driver to the DMA to the disk, and again from WinFS to the ntfs disk device.

    I don't have proof, but I bet that's what they're doing.

    Why didn't they just wipe out NTFS completely and do a new FS with a SQL theme instead with the raw blocks?

    If it's how I described it, you're doubling your IO for nothing, it would be bloat.
    Just like when you mount an iso on ext3 on the loopback device.
  • Jim Allchin - The Longhorn Update

    Trying to catch up on a bunch of WinFS stuff in this thread:

    WinFS is a metadata store for things that are currently files, but it is the primary store for things that are not usually files today (contacts, mail, appointments, tasks, schedules, annotations, etc.).

    I won't make any definitive grandiose statements about what a filesystem is or isn't, but WinFS stores files, stores regular non-file data, and stores metadata, as those terms are commonly understood. It responds to fopen/fwrite of path-based names, so in my book it's a file system.

    Generally, Larry's overview is decent--it just misses the non-file stuff that ends up in WinFS via your mail application, your scheduler, your event planner app, your list manager, etc.

    WinFS does take advantage of the NT file system driver structure to provide those services. We actually store the filestreams of file-typed items as streams, just like NTFS does (but with some extra machinery to handle transactions). We store non-file data just like SQL Server does (well, mostly like SQL as of Yukon, although we have a couple of finely tuned advancements beyond Yukon's serialization formats).

    WinFS is based on SQL Server, of course, and the SQL Server code is mostly native C++. SQL also provides a hosted CLR environment for managed programming in the store. The WinFS client is implemented in two halves: a set of enhancements to the store, written in C++, C#, and T/SQL; and a set of specialized objects that provide the basic CRUD operations via O-R mapping and expose a model for transactions, query (OPath), business logic, query notification, cursoring, Rules composition, sync adapter APIs, and so on. This second half is almost completely written in C#, although some bits are actually C# generated in a data-driven way from XML.  

    The CRUD part of that is the part that's similar to ObjectSpaces. We have not yet figured out what the change means for the ObjectSpaces plan, and we'll have to work that out as we get settled into the new logistics.

    To those of you who expressed anger, sadness, anxiousness, and anticipation about WinFS: thanks. We are still working very hard on this project, and it means a lot to know there are customers who want us to get it done and get it done right.

    Samuel Druker

    Edit: I think scobleizer already pointed here, but just in case
