
ADO.NET Entity Framework: What. How. Why.


I recently caught up with some of the technical minds behind ADO.NET's Entity Framework: Architect Michael Pizzo, Technical Lead Pablo Castro and Director of Program Management Britt Johnston.

What's an Entity Framework, you ask? Well, watch and learn all about this new ADO.NET development framework/paradigm. For the ADO developers out there, you'll be quite pleased with the architectural direction ADO has taken.

Enjoy.

Here are some links to related ADO.NET information:

Pablo’s post with links to detailed docs: http://blogs.msdn.com/adonet/archive/2006/07/11/662447.aspx

Screencast demo’ing ADO.NET vNext in action: http://blogs.msdn.com/adonet/archive/2006/07/11/662454.aspx

ADO.NET team blog: http://blogs.msdn.com/adonet/

Data Programmability team blog: http://blogs.msdn.com/data/

Britt’s first blog post (recent) sharing his thoughts on conceptual schema: http://blogs.msdn.com/data/archive/2006/07/14/665780.aspx

MSDN Data Developer Center: http://msdn.microsoft.com/data/


Follow the Discussion

  • Me thinks it's annoying when people say cutesy crap like "me thinks".  Take responsibility and credit for your own thoughts -- say them in the first person!
  • Au contraire! Methinks is actually from formal Old English; it has been around since before Shakespearean times.
  • AlphaKahunaAlphaKahuna Customers, THE valuable asset.
    Me thinks the message of the video is not the phrase "me thinks"


    ZapperGirl
    http://sexy-technology-geeks.blogspot.com/
  • CharlesCharles Welcome Change
    OK... I've edited out "me thinks" so that you can concentrate on the meaning of the video...

    Probably best to actually watch a video before you post in a video thread. That's the preferred methodology anyway.

    C
  • PeterFPeterF Aragon IT
    Interesting stuff. Only thing that wasn't highlighted was when you actually change the database tables and how that is automatically handled.
    I would expect a separate XML file with these mappings which you could update if things have changed to a point where automatic handling fails...
    Is there also another view for checking the mappings?

    Peter
  • CharlesCharles Welcome Change
    AlphaKahuna wrote:
    Me thinks the message of the video is not the phrase "me thinks"


    Correct! Smiley
    C
  • Jonathan MerriweatherCyonix Me

    Damn i want this now!

    I wish you had gone into more detail as to how it works with the database. For example: do i have to create stored procedures that i then map using that map editor thingymebob

    Or does it magically do all that for me Wink

  • Klaus EnevoldsenKlaus Enevoldsen Development has never been easier nor more complicated...
    >Damn i want this now!

    I believe that I heard somewhere that the first preview will be available some time in August. August is almost now...
  • This is majorly cool! Can't wait to get the bits.

    Couple of questions:

    1. Isn't this a complete superset of DLINQ? Is there anything DLINQ can do that you guys can't do? If that is true: Why have both? Generally, it would be nice to clear that up and let us know how this is going forward.

    2. To follow up on the "is it debuggable" question. Is it testable? I.e. do you have a story for people that do TDD? Is there good support to replace these things with mocks etc? Would be great if these things would be taken into account from the start!

    3. This is only minor, but still. What about refactoring? If I change a name in the mapping definition, will that change trickle via refactoring rename down into the places where I query in code?

    Cool stuff! David

  • >Damn i want this now!

    Now *that's* the type of feedback we like to get! (We also like to hear about missing features, implementation concerns, etc., but it's nice to know from time to time that some part of what we're doing is on track… Big Smile)

    >I wish you had gone into more detail as to how it works with the database. For example: do i have to create stored procedures that i then map using that map editor thingymebob

    While we are planning on supporting stored procedures for C(reate)U(pdate)D(elete) operations, as well as directly invoking stored procedures, these are not required for the mapping.

    The "Mapping Provider" we showed, which can be accessed directly using existing ADO.NET provider interfaces, or through ObjectServices/LINQ, uses a "Client-side View" technology based on well-understood database view transformations that we perform on the client machine (hence the name; clever, huh?).
     
    Updates are done in a similar way, borrowing from view maintenance techniques developed by relational stores. The result is a more composable, extensible, and provable mapping solution than the more ad hoc "search & replace" techniques employed by typical ORM solutions.

    We found that, in many cases, it was impractical if not impossible to make changes to relational schemas, including extensions to existing schema or even adding stored procedures/table-valued functions. By implementing views on the client, each application can have its own "view" of the data in the relational store. Of course, this doesn't prevent applications from sharing views, but it gives the flexibility to have, and maintain, separate views for different clients.
     
    Note that, although the view unfolding occurs on the client, all of the actual query processing/evaluation occurs on the server, so that the data we get back from the server matches the application's conceptual view of the data.

    Okay; probably more than you wanted to know, but we're pretty jazzed about this stuff… Wink
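    As a rough illustration of the pattern Mike describes, here is a hedged sketch of querying the conceptual model through the familiar ADO.NET connection/command shapes. The class name (MapConnection), the connection string format, and the Entity SQL text are assumptions for the sketch, not the shipped API:

```csharp
// Hypothetical sketch only: MapConnection and the connection string format
// are assumptions standing in for the "Mapping Provider" described above;
// the real CTP API may differ.
using System;
using System.Data.Common;

class MappingProviderSketch
{
    static void Main()
    {
        using (DbConnection conn = new MapConnection(
            "mapping=Northwind.map;provider=System.Data.SqlClient;provider connection string=..."))
        {
            conn.Open();
            DbCommand cmd = conn.CreateCommand();
            // The query text targets the conceptual schema; the provider
            // unfolds the client-side view and ships relational SQL to the server.
            cmd.CommandText =
                "SELECT VALUE c FROM Northwind.Customers AS c WHERE c.City = 'Seattle'";
            using (DbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["CompanyName"]);
            }
        }
    }
}
```

    Whatever the final names turn out to be, the key point from the post above holds: the query is written against the conceptual schema, the view unfolding happens on the client, and all query evaluation stays on the server.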
  • Joshua RossJoshRoss Drinking Ovaltine since 2004
    Interesting... It almost reminds me of English Query.  English Query was very interesting although it made me realize that people are really bad at asking questions.
  • eddwoeddwo Wheres my head at?
    I want it now too! Literally, next week would be good.

    I'm currently trying to work out a design for doing updates against entities that span about 10 tables, how to generate keys for the relationships on the fly etc.

    I'm thinking of using SQLXML to fetch the whole entity from the DB as a single query result and then construct UpdateGrams on the fly to submit the modifications back, but I've yet to work out all the details.

    You guys are going to make this sort of thing so much easier, just wish I could wait another year or so to use it.
  • This looks awesome! Managing the data access layer is one of my least favorite dev tasks.

    How are permissions handled? Our shop tends to set permissions at the proc/view level with no direct access to the tables.

    Looking forward to playing with this stuff!
  • AlphaKahunaAlphaKahuna Customers, THE valuable asset.

    I say this all the time, but I guess I haven't said it here yet:

    Charles rocks as an interviewer!

    [and he's a hottie too Wink  ]

  • CharlesCharles Welcome Change

    Thanks for the kind words, Alpha! Smiley

    C

  • >> Only thing that wasn't highlighted was when you actually change the database tables and how that is automatically handled. I would expect a separate XML file with these mappings which you could update if things have changed to a point where automatic handling fails...

    Exactly, there is an XML file that contains the mapping between the logical schema of your database and the conceptual EDM schema that your application sees. If your database schema changes (which we expect to be a common thing, as regular apps evolve), you can go to the XML file and adjust the mapping. Of course, there are changes that we can compensate for in the mapping (in the XML file) and changes that we can't (either because part of the data is gone, or because the infrastructure cannot transform the new shape of the database into the existing shape in the conceptual schema).
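    For illustration, such a mapping file might look roughly like the sketch below. The element and attribute names here are made up for the example and are not the actual mapping schema:

```xml
<!-- Illustrative only: element and attribute names are assumptions,
     not the actual mapping schema shipped with the Entity Framework. -->
<Mapping>
  <EntitySetMapping Name="Customers">
    <!-- Conceptual Customer entity mapped onto the logical dbo.Customers table -->
    <EntityTypeMapping TypeName="Northwind.Customer" TableName="dbo.Customers">
      <ScalarProperty Name="Id"   ColumnName="CustomerID" />
      <ScalarProperty Name="Name" ColumnName="CompanyName" />
      <ScalarProperty Name="City" ColumnName="City" />
    </EntityTypeMapping>
  </EntitySetMapping>
</Mapping>
```

    Under a scheme like this, renaming a column in the database would only require touching the ColumnName attribute; the conceptual schema and the application code would stay unchanged.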


    >> Is there also another view for checking the mappings?

    In the interest of time (the video was way too long already as you probably noticed Wink ), one of the examples used the "reverse engineer" feature that generates a conceptual schema directly from the database (i.e. a 1:1 mapping), and for the other examples I used a pre-existing mapping. I skipped showing how to actually map between the conceptual model and the database schema when needed. There is a tool that you use to describe the mappings between these things. If you prefer, you can also edit the XML files.

    -pablo
  • >> How are permissions handled? Our shop tends to set permissions at the proc/view level with no direct access to the tables.

    At least in this version, there is no permissions system built into ADO.NET itself; of course, the security infrastructure of your backend database still applies, and that's what you should use at least for the first layer of security (depending on your app, you may or may not want a second layer of permission management in the middle tier).

    If you only have access to views/procs, then you'll have to provide the system some extra information so we know which procs map to which operations on each of the entities in the system. Also, of course, if you use procs to fetch data, you don't get free-form querying using Entity SQL; instead you invoke the stored procs and get whatever results the proc returns, mapped appropriately to fit into the conceptual schema.

    >> Looking forward to playing with this stuff!

    August...we're almost there. You'll hear the noise when we put the stuff out there Wink

    -pablo
  • stevo_stevo_ Human after all
    This looks like a great solution to what would otherwise essentially be coding a highly customized DAL and an extremely rich, well-performing BLL.

    Great area to cover for ADO; it hits essentially one of the potential nightmares with database-structured applications (and to be honest, what aren't these days?).

    Can't wait to use this. Is this an optional extension to 2.0 and 3.0, or included with 3.0? The only issues I can think of are getting non-dedicated hosting solutions that support these things.
  • "O Romeo, Romeo, Wherefore art thou Romeo?"
    ---------------------
    0 Rows Returned

    Me thinks that's Old English Query, ey?

    JoshRoss wrote:
    Interesting... It almost reminds me of English Query.  English Query was very interesting although it made me realize that people are really bad at asking questions.
  • I have just finished watching the video and I have a big smile on my face. This is great stuff that could change the world! At least it will help me with my everyday work.

    I have been studying the available information on the Entity Framework in the light of an application I am working on right now. In it we do custom entities, which use inheritance, class factories with discriminating columns, etc. For persistence we use typed datasets that are more tightly (but not completely) coupled with the logical schema. It looks cleaner than the alternatives but still leaves us with three schema representations (only one of which gets generated automatically). At first sight, I think the Entity Framework would cover 80% of our needs (maybe more, I have to try it).

    I still have a few concerns, mostly about team development, logical schema change management, how to represent certain patterns we use, etc.

    All in all, my birthday comes in August. I will be waiting anxiously.
  • Ang3lFir3Ang3lFir3 Codito Ergo Sum
    Pablo [MSFT] wrote:
    
    August...we're almost there. You'll hear the noise when we put the stuff out there

    -pablo


    I'm gunna hold you to that Big Smile:O:P ....

    <melodramatic>
    Every morning I wake up wondering if the new ADO.Net bits have gone CTP.... It's my driving force for living!!!
    </melodramatic>
  • First off, Pablo talks too fast and I can barely understand his accent - even if he speaks grammatically correct English.

    Second, I agree - Charles is the best interviewer by far. I can't stand Scoble. (Not that that is even a fair comparison!)

    What upset me beyond not being able to understand Pablo's overexcited hyper-speak is that the only time 'VIEWS' were mentioned it was in passing and with a 'just trust MSR'.

    Looking back and having now read a paper on it, I think I understand. I just hate watching an hour long video with people rushing through concepts, the camera going in and out of focus and someone with such a thick accent speaking so fast.

    I think there needs to be some balance between the 'live' aspect of these interviews and some kind of coherency and organization.

    BTW, that whiteboard drawing was about the worst method of explanation I've ever come across. What was that middle box supposed to represent again? And don't you think 'conceptual' and 'logical' are the wrong words? If the UI code is called 'conceptual', how is 'logic' defined for you anyway? I forgot, this is MS-speak. You guys have your own definitions and change phraseology every time there's a new perceived market. The distinction you made seems more like a sales gimmick than a definition for a legitimate term.

    Quit the kiddie talk!

    To me, a relational database diagram IS *conceptual*!

    To me, all those advantages of entities that were talked about for the first 40 minutes of the video are used for VIEWS already.

    After reading a paper and looking at a few diagrams I can see they are being used as objects. Maybe I'll understand by the time it comes out.

    BTW, what I was searching for was along the lines of XSP:

    http://xsp.xegesis.org/

    I wonder if Jim Gray has ever heard of DL Childs.....

  • Another thing I really didn't like about the video was that data binding in WinForms was used. We never really saw what came back from these queries! We saw the queries, and we saw that the result of the queries (whatever it was) was assigned to the datasource. And some sentence along "now databinding does all the magic". That was just terrible and added to the "wow, we show you lots of magic". It would have been much better to go with a command line app, and then just loop through the results, and show how you get back strongly typed objects. And type that during the interview! It is great that you prepare some super magic code that shows stuff, but the viewer of the show will have no chance to really understand what you are doing with that sort of thing.

    Still way cool technology!
  • CharlesCharles Welcome Change
    davida242 wrote:
    Another thing I really didn't like about the video was that data binding in WinForms was used. We never really saw what came back from these queries! We saw the queries, and we saw that the result of the queries (whatever it was) was assigned to the datasource. And some sentence along "now databinding does all the magic". That was just terrible and added to the "wow, we show you lots of magic". It would have been much better to go with a command line app, and then just loop through the results, and show how you get back strongly typed objects. And type that during the interview! It is great that you prepare some super magic code that shows stuff, but the viewer of the show will have no chance to really understand what you are doing with that sort of thing.

    Still way cool technology!


    This is why we have Screencasts... I think Pablo is planning on producing some.
    C
  • Thanks for the feedback. Here are a few comments:

    christianlott wrote:
    First off, Pablo talks too fast and I can barely understand his accent - even if he speaks grammatically correct English.

    Feedback taken. I know I tend to talk fast, I just get too excited about the stuff Smiley ; also, the informal nature of C9 interviews makes it worse for me (although I think it's a good thing overall). I'll try and go slower next time.

    christianlott wrote:
    What upset me beyond not being able to understand Pablo's overexcited hyper-speak is that the only time 'VIEWS' were mentioned it was in passing and with a 'just trust MSR'.

    Looking back and having now read a paper on it, I think I understand. I just hate watching an hour long video with people rushing through concepts, the camera going in and out of focus and someone with such a thick accent speaking so fast.

    I think there needs to be some balance between the 'live' aspect of these interviews and some kind of coherency and organization.

    Good feedback. Our idea was to have a mix of documents (which you already found) and informal videos with different content; the docs would go through the details, and the video would show the take of the team. That said, if it didn't work we may need to work on that (including my accent Wink )

    christianlott wrote:
    ... don't you think 'conceptual' and 'logical' are the wrong words? If the UI code is called 'conceptual', how is 'logic' defined for you anyway? I forgot, this is MS-speak. You guys have your own definitions and change phraseology every time there's a new perceived market. The distinction you made seems more like a sales gimmick than a definition for a legitimate term.

    Quit the kiddie talk!

    To me, a relational database diagram IS *conceptual*!

    Maybe we should have given more background material. The terminology we used is for the most part made up of well-known industry and academia terms (there are some specifics required to describe new or specialized elements, of course).

    Chen's paper from '76 introduced a "conceptual" layer and discussed "multilevel views of data". The article below from Wikipedia has a summary of the "Entity-relationship" model, the relationship between the conceptual, logical, and physical layers, and pointers to Chen's papers and other resources:
    http://en.wikipedia.org/wiki/Entity-relationship_model

    The Entity Data Model is an entity-relationship model and borrows terminology from it.


    -pablo
  • Charles wrote:
    
    This is why we have Screencasts...


    Yes. The screencasts were better. Thank you.

    Charles wrote:
    I think Pablo is planning on producing some.
    C


    God, I hope not.
  • Pablo [MSFT] wrote:
    Thanks for the feedback. Here are a few comments:


    Feedback taken. I know I tend to talk fast, I just get too excited about the stuff ; also, the informal nature of C9 interviews makes it worse for me (although I think it's a good thing overall). I'll try and go slower next time.

    Good feedback. Our idea was to have a mix of documents (which you already found) and informal videos with different content; the docs would go through the details, and the video would show the take of the team. That said, if it didn't work we may need to work on that (including my accent )


    Maybe we should have given more background material. The terminology we used is for the most part made up of well-known industry and academia terms (there are some specifics required to describe new or specialized elements, of course).

    Chen's paper from '76 introduced a "conceptual" layer and discussed "multilevel views of data". The article below from Wikipedia has a summary of the "Entity-relationship" model, the relationship between the conceptual, logical, and physical layers, and pointers to Chen's papers and other resources:
    http://en.wikipedia.org/wiki/Entity-relationship_model

    The Entity Data Model is an entity-relationship model and borrows terminology from it.


    -pablo



    Thanks Pablo. I just needed a slow down. I could understand only after I replayed what you said slowly in my head.

    I didn't mean to sound like a jerk and I'm glad you didn't take it that way (though my last post may have proven otherwise).

    What's bugging me though is that - including the screencast I just saw - they keep typing out these long SQL sequences for a join. Couldn't you just refer to a view and shorten the code the same as referring to an entity? This all seems like 'views on the CLR'. Would that be a correct assumption?

    I know there are limitations on what you can store using a view, but those limitations are logical. Could you explain the differences with maybe an example?

    Thanks,

    Christian




  • I haven't watched the video yet, but is there anything here that is different/better/new compared to the object persistence frameworks already available (Gentle.NET, Hibernate, etc.)?
  • Pablo, where are you from? In the first video (the one that vanished) your desktop background made me think you were Mexican or Central American. But in this video when you started "proshecting" the columns I got the idea that you were Argentinean or Uruguayan. By the way, don't take a hit about your accent. We just live in a big world. I actually envy how fast you can speak in English without making mistakes Smiley

  • DiegoV wrote:
    

    Pablo, where are you from? In the first video (the one that vanished) your desktop background made me think you were Mexican or Central American. But in this video when you started "proshecting" the columns I got the idea that you were Argentinean or Uruguayan. By the way, don't take a hit about your accent. We just live in a big world. I actually envy how fast you can speak in English without making mistakes



    Good catch on "proshecting", well mapped to South America Smiley

    You got it right, I'm originally from Argentina (Buenos Aires).
  • William Staceystaceyw Before C# there was darkness...
    davida242 wrote:
    Another thing I really didn't like about the video was that data binding in WinForms was used. We never really saw what came back from these queries! We saw the queries, and we saw that the result of the queries (whatever it was) was assigned to the datasource. And some sentence along "now databinding does all the magic". That was just terrible and added to the "wow, we show you lots of magic". It would have been much better to go with a command line app, and then just loop through the results, and show how you get back strongly typed objects. And type that during the interview! It is great that you prepare some super magic code that shows stuff, but the viewer of the show will have no chance to really understand what you are doing with that sort of thing.

    Still way cool technology!


    But that is all databinding does.  It takes that strongly typed List and puts the elements in the proper textbox (in simple terms).  So iterating over the list manually would not really show much more IMO.  However, IIRC, there was a console app in there too.  I felt it was done at a great level.
  • William Staceystaceyw Before C# there was darkness...
    Simply great stuff guys.  BTW - Pablo, I love your accent - good job.
    Here are some thoughts.  Probably already on feature list:

    1) UI mapper between entities and logic DB (BizTalk like)
    2) Create the DB schema from the Entity model.  Deploy a local or remote db via the XML entity schema.
    3) Client-side query tracer.  Should be easy and you know what query you sent and data bytes received.
    4) Maybe some simple perf counters on query objects. (Timespan, etc)
    5) Why couldn't Entity SQL also be a .Net language or language extension with strong typing instead of hidden inside quotes?
    6) Bidirectional Refactor.  Refactor the Entity model, and update the DB.  DBAs will hate it, but I like it.  Moreover, refactoring the DB updates the entity model.  Especially helpful during dev.  Naturally, this should be integrated with the VS for DB product.
    7) Self optimizing normalization in the logic layer.  With the abstraction, we don't need to see it anyway, so the DB could change itself and we still see the entity model the same.  Maybe V2.

    Would like to hear more on Jim Gray's object DB project.  I am glad to hear he did not give up the dream and is getting his way (as he should).  Have *all* .Net types, get rid of nullable value types in the db and get rid of native sql types, and while you're at it - get rid of TSQL.  Allow any new query languages to be first class in the DB - all equal to TSQL, not on top of TSQL.  Maybe xml becomes the common denominator that the DB takes.  All query languages just send and receive xml, and that is what is parsed instead of TSQL.  Now that would be a big bonus for many of us (not all).  This would be especially nice for things like SQL/E and maybe eventually sql express.  Things like Linq and Entities will be first class citizens of the db.  2-tier will become a more popular model again.

    Great job!  Cheers
    --
    wjs

  • staceyw wrote:
    

    But that is all databinding does.  It takes that strongly typed List and puts the elements in the proper textbox (in simple terms).  So iterating over the list manually would not really show much more IMO.



    But unless you have used databinding and are familiar with it, you can't possibly know that. All I saw in the demo was that some object was given back from the query and then shown in the UI. I know that the object is a strongly typed list, but if you are not familiar with databinding, it might as well have been some weakly typed dataset sort of thing. Pablo just dropped some preconfigured thing onto the form; how is the innocent viewer supposed to know what it is? I just believe it is better to be minimalistic with these things. The great demos by Don Box and the like try to avoid these "and now I combine this with this other [insert some complex technology that not everyone is familiar with]" moments.

    Let me also add: Don't worry about the accent! Or that you are excited about the stuff. That is Channel9, right? Professional videos can appear somewhere else!
  • Pablo [MSFT] wrote:
    
    You got it right, I'm originally from Argentina (Buenos Aires).


    Pablo, it wasn't that difficult, really... I am originally from Mendoza Smiley Congratulations for your job and for your work!
  • staceyw wrote:
    2) Create the DB schema from the Entity model.  Deploy a local or remote db via the XML entity schema.


    This sounds like a great idea, even if it is a different business! A couple of thoughts:
    a. You will probably need a db specific component for this, as DDL differs. Possible candidate for Visual Studio extensibility interfaces.
    b. You will probably need extra hints on the EDM to go from entities to a normalized database.

    staceyw wrote:

    Have *all .Net types, get rid of nullable value types in the db and get rid of native sql types and while your at it - get rid of TSQL. 


    Hmmm, not a great idea IMO. It must be only me, but I love nullable types. I was tempted to ask for pervasive support for nullable types in the Entity Framework instead. I think I can understand your pain (having to deal with uncertainty), but what do you think is the alternative? In a relational database, a nullable column is just a shortcut for not creating extra tables. How can you get rid of them without creating more clutter?
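    To make the "shortcut" point concrete, here is one way the two relational shapes could surface in code; all the names are hypothetical:

```csharp
// Sketch of the point above, with hypothetical names.
// Shape 1: one table with a NULLable column -> an entity with a nullable property.
class Employee
{
    public int EmployeeID;
    public string Name;
    public int? ManagerID;   // NULLable column: null means "no manager on file"
}

// Shape 2 (null-free): the optional fact factored into its own table,
// surfacing as a separate association entity -- and costing a join.
class EmployeeManager
{
    public int EmployeeID;   // key, references Employee
    public int ManagerID;    // NOT NULL: a row exists only when the fact is known
}
```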

    staceyw wrote:

    Allow any New query languages to be first class in the DB - all equal to TSQL, not ontop of TSQL.  Maybe xml becomes the common denominator that the DB takes.  All query languages, just send and receive xml and that is what is parsed instead of TSQL. ... Things like Linq and Entities will be first class citizens of the db.  2-tier will become a more popular model again. 


    Not sure about XML and getting rid of TSQL, but it is true that now SQL is not the only language that can express queries. I guess more direct support for IQueryable, expression trees, and even entities inside the engine could be coming in future versions of SQL Server.
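    As a sketch of what first-class IQueryable/expression-tree support means: a LINQ query is captured as data rather than run eagerly, so a provider (or, someday, the engine itself) can translate it for the store. The Customer type here is a hypothetical placeholder:

```csharp
// Sketch only: Customer is a hypothetical entity type.
using System;
using System.Linq;

class Customer
{
    public string City;
    public string CompanyName;
}

class LinqSketch
{
    static void Run(IQueryable<Customer> customers)
    {
        // The query below is captured as an expression tree behind
        // IQueryable<T>; nothing executes until it is enumerated,
        // which is what lets a provider translate it for the store.
        var query = from c in customers
                    where c.City == "Seattle"
                    orderby c.CompanyName
                    select c;

        foreach (Customer c in query)   // enumeration triggers translation + execution
            Console.WriteLine(c.CompanyName);
    }
}
```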
  • Tommy CarlierTommyCarlier Widen your gaze
    AlphaKahuna wrote:
    I say this all the time, but I guess I haven't said it here yet:

    Charles rocks as an interviewer!

    [and he's a hottie too Wink  ]

    Sorry to go off-topic, but I feel a bit discriminated against.

    There were different posts a while ago where a woman was interviewed, and someone made a remark that she was good looking, and people were shocked and called him a sexist, and felt that he didn't respect the woman for her capabilities and talents. And now Alpha calls Charles a hottie, and the only response she gets is 'Thanks for the kind words'?

    What if I said 'Charles is a hottie'? Would I also get a 'Thanks for the kind words'?

    I'm not trying to be a jërk, but I'm just curious if someone can explain this to me.

  • William Staceystaceyw Before C# there was darkness...
    TommyCarlier wrote:
    
    AlphaKahuna wrote: I say this all the time, but I guess I haven't said it here yet:

    Charles rocks as an interviewer!

    [and he's a hottie too Wink  ]

    Sorry to go off-topic, but I feel a bit discriminated against.

    There were different posts a while ago where a woman was interviewed, and someone made a remark that she was good looking, and people were shocked and called him a sexist, and felt that he didn't respect the woman for her capabilities and talents. And now Alpha calls Charles a hottie, and the only response she gets is 'Thanks for the kind words'?

    What if I said 'Charles is a hottie'? Would I also get a 'Thanks for the kind words'?

    I'm not trying to be a jërk, but I'm just curious if someone can explain this to me.



    Ok.  Based on your avatar, you're a hottie too!
    Smiley  I'm a hottie, you're a hottie, wouldn't you like to be a hottie too?
    Sorry, I just got up.  Going to the lake today with the jet ski.  Have a good weekend, folks.  Cheers!
    --
    wjs
  • William Staceystaceyw Before C# there was darkness...
    Hmmm, not a great idea IMO. It must be only me, but I love nullable types. I was tempted to ask for pervasive support for nullable types in the Entity Framework instead. I think I can understand your pain (having to deal with uncertainty), but what do you think is the alternative? In a relational database, a nullable column is just a shortcut for not creating extra tables. How can you get rid of them without creating more clutter?

    I agree it sounds a bit radical, and it is.  However, IIRC, the first thing Jim Gray does with a new table is uncheck null on all fields.  That is what I do too.  In .Net 1.1, I think I actually needed a nullable once, and just worked around it.  OK, so you don't have to remove them, but I think with native .Net types in the db, the cases where you actually will *need* null will be lower.


    Not sure about XML and getting rid of TSQL, but it is true that now SQL is not the only language that can express queries. I guess more direct support for IQueryable, expression trees, and even entities inside the engine could be coming in future versions of SQL Server.

    Maybe so.  But is it easier to parse xml or a language?  How about Binary XML?  Not sure; I would think xml.  So if xml was the common query input into the engine, then any language would just output to the common xml standard and be done.  The engine would not care, or know, what language was used to produce the query.  The engine would take xml or binary xml (and tsql too, as there's no need to actually remove it - I was putting punch on the idea).  Naturally, this makes the db engine language agnostic.  It also means you could go with 1 entry point:

        public Results ProcessCmd(string xml)
        {
             // do query/job/command/etc
             return results;
        }

    Maybe I just need a drink.  Thanks again guys.  Looks great.  Cheers.
    --
    wjs
  • CharlesCharles Welcome Change
    TommyCarlier wrote:
    
    AlphaKahuna wrote: I say this all the time, but I guess I haven't said it here yet:

    Charles rocks as an interviewer!

    [and he's a hottie too Wink  ]

    Sorry to go off-topic, but I feel a bit discriminated against.

    There were different posts a while ago where a woman was interviewed, and someone made a remark that she was good looking, and people were shocked and called him a sexist, and felt that he didn't respect the woman for her capabilities and talents. And now Alpha calls Charles a hottie, and the only response she gets is 'Thanks for the kind words'?

    What if I said 'Charles is a hottie'? Would I also get a 'Thanks for the kind words'?

    I'm not trying to be a jërk, but I'm just curious if someone can explain this to me.



    I was referring to the "Great interviewer" comment. At any rate, compliments require thanks, no matter who they come from. No discrimination here.
  • staceyw wrote:
    Simply great stuff guys.  BTW - Pablo, I love your accent - good job.
    Here are some thoughts.  Probably already on feature list:

    1) UI mapper between entities and logic DB (BizTalk like)


    Yes, yes, of course. We have a prototype now, and we'll have something later on so you don't have to do everything in the XML files. (In my experience, some things are easier with the visual tool, some are easier with XML files, so it's good to have both.) The visual mapping tool won't be there in the CTP, but we have a team of folks working hard on it.

    staceyw wrote:
    2) Create the DB schema from the Entity model.  Deploy a local or remote db via the XML entity schema.


    We're still working on the scope of our tools effort. We're not planning on doing this one right now, but I'll take this feedback. Also, there is a good opportunity for 3rd party tools here Wink

    staceyw wrote:
    3) Client-side query tracer.  Should be easy, and you'd know what query you sent and the data bytes received.

    4) Maybe some simple perf counters on query objects. (Timespan, etc)


    We'll have a supportability story to help troubleshoot the system. That includes tracing, but we don't have perf counters in the plans. The details are still sketchy and it'll get clearer later on (note that the Whidbey providers are already instrumented, so you could do this (client tracing) today at the provider level).

    staceyw wrote:
    5) Why couldn't Entity SQL also be a .Net language or language extension with strong typing instead of hidden inside quotes?


    well, there is LINQ, and we fully support it in the Entity Framework Smiley
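To make the LINQ answer concrete, this is what a strongly typed query looks like in C# query syntax. This sketch runs over an in-memory collection; against the Entity Framework, the same syntax would be translated into a query against the store (the Customer type here is a hypothetical entity class):

```csharp
using System.Collections.Generic;
using System.Linq;

// A hypothetical entity type; in the Entity Framework these classes
// would be generated from the model.
public class Customer
{
    public string Name { get; set; }
    public string City { get; set; }
}

public static class QueryDemo
{
    // The compiler checks 'City' and 'Name' against the Customer type:
    // no query text hidden inside quotes.
    public static List<string> LondonCustomers(IEnumerable<Customer> customers)
    {
        var query = from c in customers
                    where c.City == "London"
                    orderby c.Name
                    select c.Name;
        return query.ToList();
    }
}
```

A rename of the `City` property would then surface as a compile error rather than a runtime failure in a quoted query string.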

    staceyw wrote:
    6) Bidirectional refactoring.  Refactor the Entity model, and update the DB.  DBAs will hate it, but I like it.  Moreover, refactoring the DB updates the entity model.  Especially helpful during dev.  Naturally, this should be integrated with the VS for DB product.

    7) Self-optimizing normalization in the logic layer.  With the abstraction, we don't need to see it anyway, so the DB could change itself and we still see the entity model the same.  Maybe V2.


    This may be something that tools may address some day, either our tools or 3rd parties'. There is the issue of "it's not polite to party on somebody else's schema" Smiley - as you said, a DBA wouldn't be happy with this feature.

    staceyw wrote:
    ...get rid of TSQL.  Allow any new query language to be first class in the DB - all equal to TSQL, not on top of TSQL.  Maybe XML becomes the common denominator that the DB takes.  All query languages just send and receive XML, and that is what is parsed instead of TSQL.   


    The system already has a unified representation for commands, although it's a data structure (sort of a logical query tree), not XML. Currently we heavily rely on this throughout the stack, and it's also used for integration with providers. Whether this will expand further as time goes is not certain yet.

    Maybe one day I'll do a "deep dive" episode to explain how the system works internally, how command trees are used, etc.

    -pablo
    I really like the direction that ADO.Net seems to be taking. I find it a much more natural way of blending data access and object-oriented design together. I've had a year of dabbling with Hibernate3 and now the new JPA frameworks, and I have a few questions. First, you always show cases where the DB exists before the app. Instead of this data-driven approach, will there be a clear model/domain-driven approach where we write our entities ourselves? If so, what will the ways to express these relationships be: attributes, XML, reflection, other? How are transactions handled? Will there be a rich exception model? Can entities be lazily fetched, and how do we reattach them to fetch children if they're in another domain? Can we generate and update the schema directly from the model?
    As another reader mentioned, it would be nice to do a comparison between these developments in ADO.Net and Hibernate3/JPA/Gentle.NET, etc.
    I really really like the LINQ integration, I want it yesterday!
  • schrepfler wrote:
    ...you always show cases where the DB exists before the app. Instead of this data-driven approach, will there be a clear model/domain-driven approach where we write our entities ourselves? If so, what will the ways to express these relationships be: attributes, XML, reflection, other?


    Yes, we'll have various options: some in the version we're working on now, some in future versions, and some will be supported but may require tools from 3rd parties or the community.

    Specifically, you can:

    - Reverse engineer a schema from a database; that's what I did in the first example, and it's handy to get a starting point to either code against it directly or start modifying the model from there.

    - Create a model describing your entities, your relationships, etc. in the model designer or in XML, and then describe how the various elements of the model map back to your database schema. For mapping you can use the tools or XML files.

    Other options will probably come later.

    Once you have a model (regardless of whether it was hand-written or generated from a database) you can fully explore the model using our metadata APIs.

    NOTE: visual tools won't be included in the August CTP, so you'll have to do this with the XML files, but we WILL include the option to reverse engineer a database so you have a starting point.

    schrepfler wrote:
    How are transactions handled?


    The short answer is that we're integrating the system with System.Transactions for transaction management. We also do automatic transactions for update processing.

    I'm finishing off some of the pending details about transactions. Once I have all the details I'll post it somewhere so you guys can chime in.

    Watch http://blogs.msdn.com/adonet
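For readers unfamiliar with System.Transactions, the integration described above would let update processing enlist in an ambient TransactionScope. A minimal sketch of the pattern (the `work` delegate stands in for hypothetical object-context calls; TransactionScope itself is the real .NET API):

```csharp
using System;
using System.Transactions;

public static class TransactionDemo
{
    // Runs the given work inside an ambient System.Transactions transaction.
    // Returns true if an ambient transaction was active inside the scope.
    public static bool RunInTransaction(Action work)
    {
        using (var scope = new TransactionScope())
        {
            bool ambientActive = Transaction.Current != null;
            work();           // e.g. updates through an object context (hypothetical)
            scope.Complete(); // without this call, Dispose() rolls the transaction back
            return ambientActive;
        }
    }
}
```

Any enlisted resource (such as a database connection opened inside the scope) would commit or roll back together with the scope.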

    schrepfler wrote:
    Will there be a rich exception model?


    There will be an exception model...I don't know what the bar is for calling it "rich" Smiley - we'll do a CTP in August, and I'd love to hear your feedback about error handling in general if you look at the bits.
     
    schrepfler wrote:
    Can entities be lazily fetched, and how do we reattach them to fetch children if they're in another domain?


    Yes, you can fetch entities lazily, but you have to do it explicitly. I'll write up a discussion about this in the next week or so to get some opinions on the specifics.

    Re-attaching...we're thinking about this. I think that we have a good plan; it won't be in the CTP, but once it's solid we'll make it public to gather feedback.
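The "lazy, but explicit" design in the answer above can be pictured with a minimal stand-in: nothing is fetched until the caller asks for it. All names here are hypothetical, not Entity Framework API:

```csharp
using System;
using System.Collections.Generic;

// A minimal sketch of an explicitly loaded related collection:
// no data is fetched until Load() is called.
public class LoadableCollection<T>
{
    private readonly Func<IList<T>> _fetch; // stands in for a database query
    private IList<T> _items;

    public LoadableCollection(Func<IList<T>> fetch) { _fetch = fetch; }

    public bool IsLoaded { get { return _items != null; } }

    // The caller decides when the round trip to the store happens.
    public void Load()
    {
        if (_items == null) _items = _fetch();
    }

    public IList<T> Items
    {
        get
        {
            if (_items == null)
                throw new InvalidOperationException("Call Load() first.");
            return _items;
        }
    }
}
```

The trade-off is that the caller must remember to call `Load()`, in exchange for never triggering a hidden database round trip.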

    schrepfler wrote:
    Can we generate and update the schema directly from the model?


    We aren't planning to include this functionality in the initial release of the Entity Framework. It's something we could do in a future version, or maybe the community will pick it up and build a nice tool Smiley


    Anyway, thanks for sending thoughts and questions, keep the feedback coming!

    -pablo
    Well, although I like the XML approach (it's the least invasive), I can't help but notice that the Java world moved from XML to annotations (which might also be a limitation: Java doesn't have partial classes, so there can be only one view of a model, or else they'd need to copy the code, which would lead to more maintenance). As far as the exception model goes, the only concrete example I know of is in the Spring framework, where they have their own exception hierarchy and provide a way to translate the concrete vendor's exceptions (and it's amazing how many ORMs they support).
    Although this might be outside the scope of this discussion, Spring also offers some interesting things like wrapping the DAOs in a proxy and declaratively assigning transaction pointcuts using XML and their AOP magic. I'm just experimenting with this right now and it seems very powerful. On one side I can test my code outside of my container, but I'm losing my exceptions, so I'll have to explore these concepts more.
    Anyhow, I don't want to be misunderstood. I love .net, and if I mention this stuff it's because I'd like it to have the best of both worlds, not because I love Java more.
  • William Staceystaceyw Before C# there was darkness...
    Pablo [MSFT] wrote:
    

    Maybe one day I'll do a "deep dive" episode to explain how the system works internally, how command trees are used, etc.

    -pablo


    That would be great!!  Hope you can find the time at some point.   Thanks.
  • schrepfler wrote:
    Well, although I like the XML approach (it's the least invasive), I can't help but notice that the Java world moved from XML to annotations (which might also be a limitation: Java doesn't have partial classes, so there can be only one view of a model, or else they'd need to copy the code, which would lead to more maintenance).


    Yep, we're aware of that, and we're actually considering supporting attributes as well, although there is no firm plan yet.

    Note that although the Java folks introduced support for attributes, their adoption is not necessarily great. I remember sitting in a talk (I think it was on new EJB 3.0 stuff) at JavaOne a couple of years ago, and when the speaker did a show of hands on who'd use attributes over XML files, it was like a 9-to-1 deal, with most folks preferring XML files (or maybe more accurately, external metadata).

    schrepfler wrote:
    As far as the exception model goes, the only concrete example I know of is in the Spring framework, where they have their own exception hierarchy and provide a way to translate the concrete vendor's exceptions (and it's amazing how many ORMs they support).

    We have some generic exceptions, but I do expect that some provider-specific exceptions will show up, at least for this release (which means that we won't be able to change that as a default behavior because it would be a breaking change...).

    You're right that there are some frameworks out there that have a normalized exception hierarchy (Hibernate 3.0 had that as one of the big new features, IIRC).

    -pablo
  • ebdrupebdrup Ebdrup
    Great stuff!
    I would really like to see more on how you create the actual entity mappings. When will the beta be available for download, and when will this ship?
    As I mentioned before, one of my great concerns is what team development will look like with the Entity Framework. I took some time to detail my thoughts:

    First, many real life projects are partitioned in modules, so their data layers are partitioned likewise.

    Often, there are sets of tables that are used exclusively in each module, and a set of tables that are common to all. Yet, there are some tables that are reused in more than one application (typical examples are security, navigation, etc).

    Besides, building a useful data layer is not done in one step nor does it take a single day. It is more often an evolutionary and error-prone process in which a programmer “imports” objects from the database each time he/she realizes they are mentioned in the specification.

    During this process, errors that affect maintainability (duplications, improper use of naming standards, etc.) are very usual.

    So, here is a short list of features that I would like to see in the Entity Framework (some are actually hard requirements). Of course, I don't know whether any of these are already included:

    1. Partitioning of the conceptual model in multiple files and assemblies.
    2. Referencing and extending (entity inheritance) between entities defined in separate files and assemblies.
    3. Creating reusable “libraries” containing entities and mappings that can be reused by different modules or different applications.
    4. "Incremental" reverse engineering of databases (I think this one is already in the graphical design tool). 
    5. Support for basic refactorings (unification, replacement, renaming, etc).
    6. Very readable and maintainable XML (it should be easy to merge two files with a source code comparison tool).
    7. Efficient and easy serialization of entities and entity sets outside the database.
    8. Separation of the conceptual model from the persistence logic (take a look at what Steve Lasker does with typed datasets).
    9. A migration tool for typed datasets XSDs. 
    10. A degree of resiliency to some schema changes.

    If you can answer any of my doubts, I will be grateful. My boss is pressing me to evaluate O/RM products, and I am telling him to wait for your framework every day Wink

  • DiegoV wrote:
    As I mentioned before, one of my great concerns is what team development will look like with the Entity Framework. I took some time to detail my thoughts:

    First, many real life projects are partitioned in modules, so their data layers are partitioned likewise.

    Often, there are sets of tables that are used exclusively in each module, and a set of tables that are common to all. Yet, there are some tables that are reused in more than one application (typical examples are security, navigation, etc).

    Besides, building a useful data layer is not done in one step nor does it take a single day. It is more often an evolutionary and error-prone process in which a programmer “imports” objects from the database each time he/she realizes they are mentioned in the specification.

    Good point. We do have some modularization mechanisms (more details inline with your questions), but I think you put it in interesting terms, that is a good way of thinking about how metadata is organized and deployed.

    I have included some comments below on the specifics. Of course, as in any software product in development, things are subject to change Wink


    DiegoV wrote:
    
    1. Partitioning of the conceptual model in multiple files and assemblies.
    2. Referencing and extending (entity inheritance) between entities defined in separate files and assemblies.
    3. Creating reusable “libraries” containing entities and mappings that can be reused by different modules or different applications.
    4. "Incremental" reverse engineering of databases (I think this one is already in the graphical design tool). 
    5. Support for basic refactorings (unification, replacement, renaming, etc).
    6. Very readable and maintainable XML (it should be easy to merge two files with a source code comparison tool).
    7. Efficient and easy serialization of entities and entity sets outside the database.
    8. Separation of the conceptual model from the persistence logic (take a look at what Steve Lasker does with typed datasets).
    9. A migration tool for typed datasets XSDs. 
    10. A degree of resiliency to some schema changes.



    1. Yes, you can partition the model in multiple files

    2. Yes, you should be able to do this (although some glitch here or there may complicate things)

    3. Yep (you may need to deploy a library + metadata)

    4. We currently don't have plans for automated incremental reverse engineering. Currently we do "one shot" reverse engineer and then you can maintain the resulting model by hand. Is that something you could live with for the initial release?

    5. While you can re-factor the model (and we'll propagate the changes to your object model in CLR classes), we won't automatically propagate the changes through the mapping, at least not this time...

    6. "very readable"...well, it's XML, so you can read it Smiley; in my experience, in most cases you can design "good looking" XML that works well for small/medium data-sets, but as the amount of data you need to represent grows, things get tricky regardless of the actual schema; there are other aspects that need to be considered and balanced, such as the evolution of the schema across versions of the framework and making sure there are no ambiguities. That said, we are looking at making sure the XML is relatively clean.

    7. Our plan is to have a mechanism to enable shipping of entities across tiers and allow for the system state to be reconstructed later on; however, that doesn't include taking care of serialization itself. We assume that you'd use any of the already-existing serialization infrastructures.

    8. Following the typed-table pattern, what you're saying is that you'd like the option to have the "typed ObjectContext" in one assembly and the domain classes in another one, is that right?

    9. We don't currently have one planned, but hey, we do have a developer community that might be interested in contributing a few of these nice tools Smiley

    10. The mapping infrastructure does provide a good degree of isolation from schema changes for the applications built on top of a conceptual model. This requires that you manually update the mappings to map to the new schema, but other than the mapping, everything else should go untouched (of course, there are certain types of changes that we cannot compensate for).
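Point 7 above (reuse existing serialization infrastructure) can be pictured with a plain entity class round-tripping through XmlSerializer, with no Entity Framework involvement. The CustomerDto type is hypothetical:

```csharp
using System.IO;
using System.Xml.Serialization;

// A hypothetical entity shape being shipped across a tier boundary.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class SerializationDemo
{
    // Serializes the entity to XML and deserializes it back,
    // as a stand-in for sending it to another tier and reconstructing it.
    public static CustomerDto RoundTrip(CustomerDto input)
    {
        var serializer = new XmlSerializer(typeof(CustomerDto));
        using (var stream = new MemoryStream())
        {
            serializer.Serialize(stream, input);
            stream.Position = 0;
            return (CustomerDto)serializer.Deserialize(stream);
        }
    }
}
```

The framework's job, per the answer above, would be reconstructing *tracking state* for such an object, not the byte-level serialization itself.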


    Hope this helps clarify some of the issues. This provided me with good perspectives on certain problems, thanks for the write up.

    -pablo

  • ebdrup wrote:
    Great stuff!
    I would really like to see more on how you create the actual Entity mappings, when will the beta be availabel for download and when will this ship?


    We're planning on doing "something" (screencast, video or something else) to talk about the model and how it maps.


    Regarding availability of bits, we're shooting for a CTP in August.

    -pablo
  • Pablo [MSFT] wrote:
    

    5. While you can re-factor the model (and we'll propagate the changes to your object model in CLR classes), we won't automatically propagate the changes through the mapping, at least not this time...



    Not propagating a rename into the DB (that is what you mean by "through the mapping", right?) seems perfectly fine! Database refactorings are a complicated class of things by themselves, as you have to take care of the change scripts that need to be deployed, etc.

    What would be nice is if a rename of a database object in the new VS Team Database role would propagate into the mapping file Smiley Just into it, not through it, actually.

    Also, when you say that renames will propagate into the CLR objects: does that mean you will use the rename refactoring code that is in VS to do that? That would be incredibly cool! If I change the name of a property in my entity, all C# code that references that property (i.e. not the class that represents the entity, but the code that uses that class) would update automatically.
  • davida242 wrote:
    
    Pablo [MSFT] wrote: 

    5. While you can re-factor the model (and we'll propagate the changes to your object model in CLR classes), we won't automatically propagate the changes through the mapping, at least not this time...



    Not propagating a rename into the DB (that is what you mean by "through the mapping", right?) seems perfectly fine! Database refactorings are a complicated class of things by themselves, as you have to take care of the change scripts that need to be deployed, etc.

    What would be nice is if a rename of a database object in the new VS Team Database role would propagate into the mapping file Just into it, not through it, actually.

    Also, when you say that renames will propagate into the CLR objects: does that mean you will use the rename refactoring code that is in VS to do that? That would be incredibly cool! If I change the name of a property in my entity, all C# code that references that property (i.e. not the class that represents the entity, but the code that uses that class) would update automatically.


    We talk with the VS team database folks often; it's not gonna happen now, but it's reasonable to think of some integration there as you point out, we'll see how things go Smiley

    Regarding renames into the CLR objects: no, we don't "refactor" the types, we re-generate them. When you design a model using the EDM schema designer or your favorite XML editor, once you're done we generate the CLR classes that represent each of the entities for you. The generated code consists of partial classes, so you can add your own stuff in a separate file, which means that we can simply re-gen the types whenever we see a new schema, without worrying about overwriting customizations to the classes.
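The partial-class split described above looks like this in practice: the generated file and the hand-written file compile into a single type, so re-generating the model never overwrites your additions (type and member names here are illustrative, not generated output):

```csharp
// Employee.generated.cs -- regenerated from the model, never edited by hand.
public partial class Employee
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Employee.cs -- hand-written extensions, safe across regeneration.
public partial class Employee
{
    // Custom logic lives alongside the generated properties
    // because both declarations merge into one class at compile time.
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}
```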

    -pablo
    Well, it seems ObjectSpaces isn't dead after all. Big Smile

    Anyway... do you see this framework simply as a DAL-tier feature? Are the entities that are generated basically containers for metadata? Or can the entities be used as business objects? In other words, can the entities contain business rules, or are they more like data transfer objects (DTOs) that have no encapsulated code?
  • Gosh this is a painful video to watch -

    And not because of the usual poor video quality - this time it is the constant talking from the protagonists without saying anything but blah blah blah (verbal diarrhea).
    The entity concept is very simple to understand and explain - gosh, everyone knows what it is (and if they don't, they are not watching your video...). Things that are well understood can be explained easily - I am now worried about what these guys have come up with in this framework. The basic idea expressed in this video sounds correct - but since they took an hour to explain a simple concept that should take 5 minutes...
    who knows... (LIVE AND LEARN)

    To answer your question:
    Hopefully this should be able to replace the BO layer and the DAL - in fact, it would be best if it created the DAL code inside these BOs as partial classes. That way we would be able to add BO functionality easily.

  • Pablo,

    I apologize for being so late with an answer. I repeat my original 10 points here for clarity:
    1. Partitioning of the conceptual model in multiple files and assemblies.
    2. Referencing and extending (entity inheritance) between entities defined in separate files and assemblies.
    3. Creating reusable “libraries” containing entities and mappings that can be reused by different modules or different applications.
    4. "Incremental" reverse engineering of databases (I think this one is already in the graphical design tool). 
    5. Support for basic refactorings (unification, replacement, renaming, etc).
    6. Very readable and maintainable XML (it should be easy to merge two files with a source code comparison tool).
    7. Efficient and easy serialization of entities and entity sets outside the database.
    8. Separation of the conceptual model from the persistence logic (take a look at what Steve Lasker does with typed datasets).
    9. A migration tool for typed datasets XSDs. 
    10. A degree of resiliency to some schema changes.

    Pablo [MSFT] wrote:
    
    1. Yes, you can partition the model in multiple files

    Great Smiley

    Pablo [MSFT] wrote:
    
    2. Yes, you should be able to do this (although some glitch here or there may complicate things)

    Hmm... I have to see the bits to understand. See, my birthday is in 8 days Smiley

    Pablo [MSFT] wrote:
    
    3. Yep (you may need to deploy a library + metadata)

    Seems very reasonable, but just in case, would it make sense to encrypt the metadata in some deployment scenarios?

    Pablo [MSFT] wrote:
    
    4. We currently don't have plans for automated incremental reverse engineering. Currently we do "one shot" reverse engineer and then you can maintain the resulting model by hand. Is that something you could live with for the initial release?

    This is kind of a shame. I could live with it, but then I can also live without the Entity Framework. I mean, it would make a real difference, since in many projects the data model evolves rapidly while the application is being developed. There is usually a tight feedback loop there. Also, things can get very hairy when a slight change to the database makes the data layer fail (but then, as you said, you can make the framework more resilient to schema changes than the current technology).

    Pablo [MSFT] wrote:
    
    5. While you can re-factor the model (and we'll propagate the changes to your object model in CLR classes), we won't automatically propagate the changes through the mapping, at least not this time...

    This is clearly a "nice to have" only. If you make the XML very clean (as in point 6), it will always be possible to do it by hand.

    Pablo [MSFT] wrote:
    
    6. "very readable"...well, it's XML, so you can read it ; in my experience, in most cases you can design "good looking" XML that works well for small/medium data-sets, but as the amount of data you need to represent grows, things get tricky regardless of the actual schema; there are other aspects that need to be considered and balanced, such as the evolution of the schema across versions of the framework and making sure there are no ambiguities. That said, we are looking at making sure the XML is relatively clean.

    Good. Just to add an example of why I want the XML to be clean: I too often have to merge two typed datasets by placing both XSD files in a source code comparison tool (Beyond Compare rocks!). There are two noticeably different parts in those XSDs. The first part, surrounded by the DataSource element, is where the table adapters live. There the XML is very clean and it is generally easy to work with. The second part, where the actual table types and constraints live, is very verbose and unreadable to me. Something that really gets in my way is how the design tool arbitrarily reorders some attributes and sibling elements when the XSDs are edited on different computers. This usually makes my work too hard. So this is, I guess, something I would like you to avoid.

    Pablo [MSFT] wrote:
    
    7. Our plan is to have a mechanism to enable shipping of entities across tiers and allow for the system state to be reconstructed later on; however, that doesn't include taking care of serialization itself. We assume that you'd use any of the already-existing serialization infrastructures.

    Sure!

    Pablo [MSFT] wrote:
    
    8. Following the typed-table pattern, what you're saying is that you'd like the option to have the "typed ObjectContext" in one assembly and the domain classes in another one, is that right?

    Well, yes, what I would like is to be able both to "ship" the domain classes to different logical tiers (as in point 7) but also to deploy an assembly containing the domain classes to a different physical tier (possibly to a computer that we don't fully trust) without having to give up either the underlying data model, nor the whole persistence logic, nor (god forgive us!) any information about the connection the ObjectContext will hold.  

    Pablo [MSFT] wrote:
    
    9. We don't currently have one planned, but hey, we do have a developer community that might be interested in contributing a few of these nice tools

    Good! I don't think I am worthy of it Smiley But just in case, make sure your feature set is a superset of that of Typed DataSets Smiley. I am really interested in seeing this see the light, as well as a tool that would create a database schema from an EDM schema (which someone else already suggested).

    Pablo [MSFT] wrote:
    
    10. The mapping infrastructure does provide a good degree of isolation from schema changes for the applications built on top of a conceptual model. This requires that you manually update the mappings to map to the new schema, but other than the map everything else should go untouched (of course, there are certain types of changes that we cannot compensate for).

    Sounds fine so far.

    Pablo [MSFT] wrote:
    
    Hope this helps clarify some of the issues. This provided me with good perspectives on certain problems, thanks for the write up.

    -pablo

    Wow! Pablo, I am grateful for the chance to give you my feedback, and proud that you value it like this.

    Happy shipping of the CTP!

  • Great job!! Big Smile
    I've been waiting for this for years!!

    Congratulations
  • The additional screencast is really helpful. The whole stuff is very impressive.

    I guess the EDM designer shown in the video is part of an upcoming CTP.

    But, really great stuff...

    Peter

    PS: I didn't have any problems understanding you (maybe it's because I am not a native English speaker, or I was just listening more closely)

    I have downloaded the ADO .NET vNext August 2006 CTP, and it's amazing! Big Smile But what bothers me is the diagram you showed in the video.
    In the documents, it says that there's no EDM diagram and I have to edit it myself. This is okay, but I can't find the real definition of these models, especially when I look at the generated XML sources: the msl, csdl, and ssdl files.

    My points are, (maybe it's been discussed in the previous post):

    • Is there any clear definition of these files?
    • When can I get the diagramming tools for EDM shown in the video? It feels like showing a kid a candy with chocolate promised inside, it looks yummy, and then the real candy has no chocolate filling at all... Sad
    • What about real data and object mapping for other databases (other than MS SQL Server)? Talking about mapping also means accepting other kinds of databases, like Access or even Oracle. And the integration with LINQ? I just see a code sample, but I still don't understand how deep the integration with LINQ is, although I had to install LINQ before installing the ADO .NET vNext CTP.
    • Then, since ADO.NET 2.0 has cool features such as the Data Provider model and the use of the Factory pattern, how does this fit in ADO 3.0? Perplexed Some people still want to be able to work at this level, defining a Provider and creating their own Adapter while staying in accordance with the ADO factory model. I hope I won't see another architectural shift.
    Thanks for the video, anyway! But I'm afraid I won't encourage my friends to download ADO vNext, since the diagram isn't available yet, and I would like to get detailed information about this.
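For readers wondering what the csdl file contains: the conceptual schema is plain XML describing entity types, keys, and properties. A minimal fragment looks roughly like this (the Customer type is hypothetical, and the exact namespace and attribute set vary by CTP, so treat this as a sketch of the shape rather than the definitive format):

```xml
<Schema Namespace="MyModel" xmlns="http://schemas.microsoft.com/ado/2006/04/edm">
  <EntityType Name="Customer">
    <Key>
      <PropertyRef Name="CustomerId" />
    </Key>
    <Property Name="CustomerId" Type="Int32" Nullable="false" />
    <Property Name="Name" Type="String" />
  </EntityType>
</Schema>
```

The ssdl file describes the store (database) schema in a similar shape, and the msl file maps the conceptual elements onto the store elements.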


    I think this is the same idea as the ATG SQL repository.
  • I'm a big fan of this approach.

    Will there be support for cross-database and cross-server joins?

    I.e., modelling relationships between two entities whose locations can change relative to each other.

    e.g.

    Can it handle

    DB1..Table1 to DB2..Table2 in same server

    or

    Server1.DB1..Table1 to Server2.DB2..Table2

    The SQL for the above scenarios will vary based on where you are connected, what the linked server names are, etc.

    Is this supported, or will it be in the future?

    Thanks and Regards
    Pete



  • I think this is all interesting, very cool work and I bet a lot of hours were spent developing, BRAVO!

    The thing is, though, I am still not sure I agree with what an Entity is actually trying to do. I am not sure that we are trying to model real objects in the sense that they are individual real things.

    First of all, we can't help but ask a few questions. For example, are we trying to model one single entity, or are we actually modeling a pattern-match result over entities? What I mean is: a true entity definition would only work for a specific entity. In fact, I argue that we do not actually model entities even in our heads, in the sense of real objects.

    Objects go through changes all the time, on both small and large scales. This means that even though I look at a pop can as that single pop can, it may not actually have the same structure on all levels; for example, it may have a dent in it.

    However, it is still the same can. Why? Because my mental pattern-matching routines have not determined it is different enough to appear "like something else". If you throw that pop can into a series of pop cans, I might or might not be able to find it again; but I might have been lucky and noticed the dent, and if there was a reason to find it again, perhaps no other can has a dent, and therefore I have hopefully found my pop can. You might argue that all pop cans are a specific entity, in the sense that there is no verb we would perform on one pop can any differently than on another. But this may not be true: I may want to perform an action on a specific can, based only upon the dent.

    What I'm getting at is that this is a completely arbitrary grouping. It seems more and more people want to make everyone feel and believe the same way they do. It would be truly wonderful if Love, Hate and Fear were entities we could manipulate in our souls and others', and maybe then we could come up with an automated way to gauge them.

    It sounds to me like the problem is being masked here. Are we saying developers cannot create the right joins and statements, or is it just too expensive, and Microsoft doesn't want people to have to think about the possibility that entity definitions may break? The flexibility of joins serves a very valuable purpose in relationship modeling.



  • Before I even read about the Entity Framework, I was building a class diagram in Visual Studio 2005 that modeled a specific problem I was having.

    I wanted a very simple way to store configuration in multiple data stores, yet provide the flexibility for one of my "Configuration" objects to change where it was stored (for example, if SQL was down and it needed to pull from a cached XML store). (Don't ask me why; I was being a bit philosophical when doing this.) I came to the conclusion that I needed some standard way to deal with data storage.

    I had all sorts of fun components, everything from (you probably guessed) "Mapping Interfaces" to data-type conversion mapping to field-name mapping; all sorts of wonderful "What if something changes? How can I not have to recompile my app..."

    Then I found out about the Entity Framework, and I thought: yes, I bet Microsoft would eventually decide to go in this kind of direction. It really does simplify development when you're not dealing with specifics.

    The rule there is dealing with something in a simpler, more abstract way. So instead of dealing with bits, you deal with bytes; instead of dealing with bytes, you deal with data types; instead of dealing with data types by themselves, you can reference an object type...

    Hmm, something is going on here. I realized that mapping in general is a fundamental feature which allows the left-hand side of a problem to talk to the right-hand side without worrying about how the right side deals with communicating back. This is wonderful for read operations, because it does not require any conversation with the right side about how it should deal with things.

    However, when it comes to modifying something on the right-hand side, there need to be tools for the left-hand side to understand the procedures by which the right side will perform operations, maintaining some expectation of consistency on the left-hand side.

    In other words, abstraction tells the left side less and less about what is going on in the right side, and vice versa. This is great for being secretive or for being less complex; however, it makes it harder and harder to do what? Optimize the right side.

    This is one of the reasons we look deep into things in real life, at a small scale, with tools that let us see past the abstractions. At some point (I don't know when, specifically) you can no longer change the world, because you cannot realign the structures which make the abstraction possible.

    However, I will admit that I would not want to have to deal with the specifics of the atomic structure of an orange every time I wanted to eat one. :)

     

  • It looks interesting. There are some specific problems I'd like to see solved, though. The most serious is below:

    A big pain we have at the moment is keeping knowledge in as few places as possible, in order to make maintenance and creation easier. The knowledge that suffers most is things like data types: knowledge that this field is a date, or a string limited to 32 upper-case characters that cannot contain blanks. Maybe even more complex validation, such as a regular-expression match or an email address.

    At the moment that knowledge has to be coded into every form or field where the data appears. It's easy for the programmer to forget (what's a missing property between people?), and if you change it, it's easy to miss places where it's used. Also, in .NET 2.0 it seems hard to get basic support for this type information to flow into things like DataGridView: I bind to a Date and I get a text-box editor that lets me enter non-date characters. The Cancel action in the Validating event is not user-friendly and seems to give no chance for good feedback; or I'm getting it wrong.

    An entity model seems like a wonderful way to get this knowledge coded in the one place where it is really known: the conceptual model of the system. If that model carries through the code, then when you bind to it the knowledge should be used. The form controls should automatically pick up the type validation, and ideally there should be an easy-to-use mechanism for inserting more complex business validation into the model.
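    The idea of declaring the constraints once and reusing them everywhere can be sketched like this. A minimal illustration in Python (the field definitions and the validate helper are hypothetical, not an Entity Framework API):

    ```python
    import re

    # Field constraints declared once, in the conceptual model.
    FIELDS = {
        "code": {"max_len": 32, "pattern": r"[A-Z]+"},          # upper case, no blanks
        "email": {"pattern": r"[^@\s]+@[^@\s]+\.[^@\s]+"},      # naive email check
    }

    def validate(field, value):
        """Validate a value against the single, shared field definition."""
        spec = FIELDS[field]
        if "max_len" in spec and len(value) > spec["max_len"]:
            return False
        return re.fullmatch(spec["pattern"], value) is not None
    ```

    Every form, grid or service call would then validate through the same definition, so changing a rule in the model changes it everywhere it is used.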

    I know this gets into arguments we've seen in the modelling world about where the business logic belongs: an intelligent model or an intelligent middle tier. Some things, though, are so basic they should be there in the model. They are there in the model when it hits the database. It should be easy to carry this through to the front end and the user.

     - Richard

  • What's up with this video? I cannot view it. When I first tried viewing it, I thought I had a problem because of Firefox, so I switched to IE7, with no luck. I keep getting a "Windows Media Player cannot find the file" error. I even tried to view it on another machine behind my network (my main machine is Vista x64), a Windows 2003 Server R2, with no luck. I have a really fast cable connection. Anyone have any idea what's going on? I am very interested in the Entity Framework, and watching a video would be a good way to get started.

    I forgot to say I have viewed a variety of other videos on Channel 9 with no problems at all.
  • OK, this makes no freakin' sense. I cannot watch this video at home no matter what computer I try. I tried it at work, and I was able to start watching it, but unfortunately at work I don't have the time. What is strange is that I can watch other videos on Channel 9, for example:
    mms://wm.microsoft.com/ms/msnse/0607/28366/CraigMundie_Final_MBR.wmv
    Anyone have a clue?

  • Can anybody tell me how to download this video?
    It is not available for download...
  • I had very little knowledge of EF prior to this video, and now I feel that I have a good enough handle on it to begin playing around!  I think this is an incredibly useful tool, and I look forward to using it in my upcoming projects. 

    From what I can see now, great work!

    PS: I think that MSFT really needs to pool their resources and give Pablo a better monitor so he can keep doing such a great job without slowly going blind!
  • Jesús Bosch

    It's difficult to follow the recorded screen... a webcast or screencast would be much better

  • lloyd

    Nice video, guys!!!


Comments Closed

Comments have been closed since this content was published more than 30 days ago, but if you'd like to continue the conversation, please create a new thread in our Forums, or Contact Us and let us know.