
Comments

Pablo [MSFT]
  • Pablo Castro, Britt Johnston, Michael Pizzo: ADO.NET Entity Framework - One Year Later

    Bydia wrote:
    Where can I get the WebDataService Template that I saw in the video on Channel9?


    That template is part of Project Astoria, and we have not released a beta 2-compatible version yet (we will very soon).

    Bydia wrote:
    Also,  I installed everything into a clean install in a VPC with all hotfixes.  I typed everything like in the demo but I got an error with the following line:
    using (NorthwindEntities db = NorthwindEntities())

    the second NorthwindEntities gives an error when compiled:
    'NorthwindModel.NorthwindEntities' is a 'type' but is used like a 'variable'.

    Right, that's what we want no?


    You are missing a "new" keyword after the "=" symbol, so the compiler is getting confused about the meaning of the statement.
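    For reference, the corrected statement just adds the missing keyword (using the names from the demo):

```csharp
// Adding "new" makes this an object instantiation rather than
// an attempted method call on the type name:
using (NorthwindEntities db = new NorthwindEntities())
{
    // ... query the context here ...
}
```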
  • Pablo Castro, Britt Johnston, Michael Pizzo: ADO.NET Entity Framework - One Year Later

    hamper wrote:
    

    Since you are using Northwind, I guess the designer doesn't support schemas in the database, just like the other Visual Studio designers and wizards in 2005 and 2008!!!
    Maybe we should ban Northwind and Pubs in demos and use Adventureworks.



    We actually now fully support databases that use owning schemas to group tables (we didn't back a year ago when we did the original Channel 9 video that's referenced by this one).

    You can point the EDM designer to AdventureWorks and we'll automatically generate a fully-functional EDM schema and default mapping for it. (To be very specific, the whole process generates one warning, where we let you know that there is a table with a column with more numeric precision than the CLR can represent with its "decimal" type, and that we skipped it.)

    We use Northwind just for demos and just because it's a small schema...it's definitely not our reference test database Smiley
  • Pablo Castro: Astoria Data Services

    JoshRoss wrote:
    I would imagine that you would need a service that produced and consumed one-time-use tokens. [EDIT] On second thought, row level permissions would work just fine. If it wasn't supported by the database, you could roll your own solution. Each webUser could be in a table with a permission +C+R+U-D say, a table name, and a primary key. Every sqlCommand would have something appended on the end. like select * from customers where city = 'Palm Beach' and customerID in (select primaryKey from webUsers where permission like '%+R% and webUserId='@webUserID') and (select primaryKey from webUsers where permission like '%+R% and webUserId='@webUserID') not null


    We are still looking at the proper authorization model, including finding the appropriate authorization granularity.

    I agree that instance-level security is interesting, particularly from the application building perspective; however, making it fast is difficult, and making it work over arbitrary stores is not always possible.

    As we think about this I'll try to post on my blog on the various thoughts/approaches we consider. If you have input on the topic, I'd love to hear it.

    -pablo
  • Pablo Castro: Astoria Data Services

    JoshRoss wrote:
    How can you Tamperproof URIs for CRUD operations? What about the Cross site request forgery problem? How do other REST implementations deal with this? Will Astoria go through a standardization process like WS*? This should be Secure and Simple. Because there are existing solutions that are secure or simple, but not both.


    Not sure what you mean by tamperproof. Do you refer to protecting the URIs themselves? Or protecting the app from users creating their own URIs? The Astoria CRUD interface is no different from a regular website in some sense. In a "typical" application, at some point you fill up some fields and click "submit", which causes an HTTP POST. Anybody can see that URL and send a POST to it. The server-side code has to make sure you had the rights to do so (even if your webpage wasn't used to submit the request), and that the operation makes sense for the app. Astoria entry points have similar requirements; you have to indicate the authorization requirements, and you have to think about the consistency rules that apply to any side-effecting operation that you expose through the interface.

    Regarding cross-site request forgery, Astoria does some things now, and you can expect more to come. First, HTTP GET requests are non-side-effecting by default (unless you introduce side-effecting operations explicitly), and making cross-site non-GET requests is much harder. We'll also probably apply the usual techniques, such as requiring special HTTP headers, to further ensure that the request is coming from an allowed site.

    It's really early to talk about standardization. We'll see how things turn out. I think that the web development community in general (including ourselves) still has a number of things to sort out in the HTTP (REST-style) space.

    -pablo
  • Pablo Castro: Astoria Data Services

    Arturo wrote:
    I have to pause and ask.  Why do I need this?  It seems like technology in search of a solution.   I have some security concerns as well with Astoria. 


    Hi Arturo,

    Here are some resources that hopefully will help clarify the scenarios we're going after with this technology.

    Overview document:
    http://astoria.mslivelabs.com/Overview.doc

    Mix 2007 presentation:
    http://sessions.visitmix.com/default.asp?year=All&event=1011&sessionChoice=2011,2012&sortChoice=4&stype=asc&id=1573&search=XD006&rsscheck=rss

    We do, of course, design things top-down; we start with application scenarios and go from there.

    Regarding security concerns, could you be more specific? I'd be really interested in hearing them so that I can either elaborate on how we think about specific security aspects, or add them to the list of things to sort out if I haven't heard of/thought of the issue before.

    Thanks,
    -pablo

  • ADO.NET Entity Framework: What. How. Why.

    davida242 wrote:
    
    Pablo [MSFT] wrote: 

    5. While you can re-factor the model (and we'll propagate the changes to your object model in CLR classes), we won't automatically propagate the changes through the mapping, at least not this time...



    Not propagating a rename into the DB (that is what you mean by "through the mapping", right?) seems perfectly fine! Database refactorings are a complicated class of things by themselves, as you have to take care of the change scripts that need to be deployed, etc.

    What would be nice is if a rename of a database object in the new VS Team Database role would propagate into the mapping file. Just into it, not through it, actually.

    Also, when you say that renames will propagate into the CLR objects, does that mean you will use the rename-refactoring code that is in VS to do that? That would be incredibly cool! If I change the name of a property in my entity, all C# code that references that property (i.e. not the class that represents the entity, but the code that uses that class) would update automatically.


    We talk with the VS team database folks often; it's not gonna happen now, but as you point out it's reasonable to think of some integration there. We'll see how things go Smiley

    Regarding renames into the CLR objects: no, we don't "refactor" the types, we re-generate them. When you design a model using the EDM schema designer or your favorite XML editor, once you're done we generate the CLR classes that represent each of the entities for you. The generated code consists of partial classes, so you can add your own stuff in a separate file, which means that we can simply re-gen the types whenever we see a new schema, without worrying about overwriting customizations to the classes.
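    As a sketch of the partial-class split described above (file and member names are illustrative, not the exact generated shape):

```csharp
// Northwind.Designer.cs -- generated file, overwritten on every re-gen
public partial class Customer
{
    public string CustomerID { get; set; }
    public string CompanyName { get; set; }
}

// Customer.cs -- your file, never touched by the code generator
public partial class Customer
{
    public override string ToString()
    {
        return CompanyName + " (" + CustomerID + ")";
    }
}
```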

    -pablo
  • ADO.NET Entity Framework: What. How. Why.

    ebdrup wrote:
    Great stuff!
    I would really like to see more on how you create the actual Entity mappings. When will the beta be available for download, and when will this ship?


    We're planning on doing "something" (screencast, video, or something else) to talk about the model and how it maps.


    Regarding availability of bits, we're shooting for a CTP in August.

    -pablo
  • ADO.NET Entity Framework: What. How. Why.

    DiegoV wrote:
    As I mentioned before, one of my great concerns is what team development will look like with the Entity Framework. I took some time to detail my thoughts:

    First, many real life projects are partitioned in modules, so their data layers are partitioned likewise.

    Often, there are sets of tables that are used exclusively in each module, and a set of tables that are common to all. Yet, there are some tables that are reused in more than one application (typical examples are security, navigation, etc.).

    Besides, building a useful data layer is not done in one step nor does it take a single day. It is more often an evolutionary and error-prone process in which a programmer “imports” objects from the database each time he/she realizes they are mentioned in the specification.

    Good point. We do have some modularization mechanisms (more details inline with your questions), but I think you put it in interesting terms, that is a good way of thinking about how metadata is organized and deployed.

    I have included some comments below on the specifics. Of course, as in any software product in development, things are subject to change Wink


    DiegoV wrote:
    
    1. Partitioning of the conceptual model in multiple files and assemblies.
    2. Referencing and extending (entity inheritance) between entities defined in separate files and assemblies.
    3. Creating reusable “libraries” containing entities and mappings that can be reused by different modules or different applications.
    4. "Incremental" reverse engineering of databases (I think this one is already in the graphical design tool). 
    5. Support for basic refactorings (unification, replacement, renaming, etc).
    6. Very readable and maintainable XML (it should be easy to merge two files with a source code comparison tool).
    7. Efficient and easy serialization of entities and entity sets outside the database.
    8. Separation of the conceptual model from the persistence logic (take a look at what Steve Lasker does with typed datasets).
    9. A migration tool for typed datasets XSDs. 
    10. A degree of resiliency to some schema changes.



    1. Yes, you can partition the model in multiple files

    2. Yes, you should be able to do this (although some glitch here or there may complicate things)

    3. Yep (you may need to deploy a library + metadata)

    4. We currently don't have plans for automated incremental reverse engineering. Currently we do "one shot" reverse engineer and then you can maintain the resulting model by hand. Is that something you could live with for the initial release?

    5. While you can re-factor the model (and we'll propagate the changes to your object model in CLR classes), we won't automatically propagate the changes through the mapping, at least not this time...

    6. "Very readable"...well, it's XML, so you can read it Smiley. In my experience, in most cases you can design "good looking" XML that works well for small/medium data-sets, but as the amount of data you need to represent grows, things get tricky regardless of the actual schema. There are other aspects that need to be considered and balanced, such as the evolution of the schema across versions of the framework and making sure there are no ambiguities. That said, we are looking at making sure the XML is relatively clean.

    7. Our plan is to have a mechanism to enable shipping of entities across tiers and allow the system state to be reconstructed later on; however, that does not include taking care of serialization itself. We assume that you'd use any of the already-existing serialization infrastructures.

    8. Following the typed-table pattern, what you're saying is that you'd like the option to have the "typed ObjectContext" in one assembly and the domain classes in another one, is that right?

    9. We don't currently have one planned, but hey, we do have a developer community that might be interested in contributing a few of these nice tools Smiley

    10. The mapping infrastructure does provide a good degree of isolation from schema changes for the applications built on top of a conceptual model. This requires that you manually update the mappings to map to the new schema, but other than the mapping, everything else should go untouched (of course, there are certain types of changes that we cannot compensate for).


    Hope this helps clarify some of the issues. This provided me with good perspectives on certain problems, thanks for the write up.

    -pablo

  • ADO.NET Entity Framework: What. How. Why.

    schrepfler wrote:
    Well, although I like the XML approach (it's the least invasive), I can't help but notice that the Java world moved from XML to annotations (which might also be a limitation; Java doesn't have partial classes, so there can be only one view of a model, or else they'd need to copy the code, which would lead to more maintenance).


    Yep, we're aware of that, and we're actually considering supporting attributes as well, although there is no firm plan yet.

    Note that although Java folks introduced support for attributes, their adoption is not necessarily great. I remember sitting in a talk (I think it was on new EJB 3.0 stuff) at JavaOne a couple of years ago, and when the speaker did a show of hands for who'd use attributes over XML files, it was like a 9-to-1 deal, with most folks preferring XML files (or maybe more accurately, external metadata).

    schrepfler wrote:
    As far as the exception model goes, the only concrete example I know of is in the Spring framework, where they have their own exception hierarchy and they provide a way to translate the concrete vendor's exceptions (and it's amazing how many ORMs they support).

    We have some generic exceptions, but I do expect that some provider-specific exceptions will show up, at least for this release (which means that we won't be able to change that as a default behavior because it would be a breaking change...).

    You're right that there are some frameworks out there that have a normalized exception hierarchy (Hibernate 3.0 had that as one of the big new features, IIRC).

    -pablo
  • ADO.NET Entity Framework: What. How. Why.

    schrepfler wrote:
    ...you always show cases where the DB exists before the app. Instead of this data driven approach will there be a clear model/domain driven approach where we write our entities ourselves? If so what will the ways to express these relationships be, attributes, xml, reflection, other?


    Yes, we'll have various options: some in the version we're working on now, some in future versions, and some that will be supported but may require tools from 3rd parties or the community.

    Specifically, you can:

    - Reverse engineer a schema from a database; that's what I did in the first example, and it's handy to get a starting point to either code against it directly or start modifying the model from there.

    - Create a model describing your entities, your relationships, etc. in the model designer or in XML, and then describe how the various elements of the model map back to your database schema. For mapping you can use the tools or XML files.

    Other options will probably come later.

    Once you have a model (regardless of whether it was hand-written or generated from a database) you can fully explore the model using our metadata APIs.
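    For instance, enumerating the entity types in the conceptual model takes only a few lines against the metadata APIs (a sketch assuming the System.Data.Metadata.Edm surface, where `db` is an ObjectContext such as a generated NorthwindEntities):

```csharp
using System.Data.Metadata.Edm;

// List every entity type defined in the conceptual (C-space) model:
foreach (EntityType entityType in
         db.MetadataWorkspace.GetItems<EntityType>(DataSpace.CSpace))
{
    Console.WriteLine(entityType.Name);
}
```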

    NOTE: visual tools won't be included in the August CTP, so you'll have to do this with the XML files, but we WILL include the option to reverse engineer a database so you have a starting point.

    schrepfler wrote:
    How are transactions handled?


    The short answer is that we're integrating the system with System.Transactions for transaction management. We also do automatic transactions for update processing.
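    From application code, the System.Transactions integration typically looks something like this sketch (assuming a generated NorthwindEntities context; exact member names may differ in the CTP):

```csharp
using (TransactionScope scope = new TransactionScope())
using (NorthwindEntities db = new NorthwindEntities())
{
    // ... make changes through the context ...
    db.SaveChanges();  // enlists in the ambient transaction
    scope.Complete();  // commit; disposing without Complete() rolls back
}
```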

    I'm finishing off some pending details about transactions. Once I have all the details I'll post them somewhere so you guys can chime in.

    Watch http://blogs.msdn.com/adonet

    schrepfler wrote:
    Will there be a rich exception model?


    There will be an exception model...I don't know what the bar is for calling it "rich" Smiley - we'll do a CTP in August, and I'd love to hear your feedback about error handling in general once you look at the bits.
     
    schrepfler wrote:
    Can entities be lazily fetched and how to reattach them to fetch children if it's in another domain?


    Yes, you can fetch entities lazily, but you have to do it explicitly. I'll write up a discussion about this in the next week or so to get some opinions on the specifics.
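    The explicit style looks roughly like this (entity and property names are illustrative; `db` is a generated context):

```csharp
Customer customer = db.Customers.First(c => c.CustomerID == "ALFKI");

// Related orders are not fetched until you explicitly ask for them:
if (!customer.Orders.IsLoaded)
{
    customer.Orders.Load();  // issues a second query for the related rows
}
```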

    Re-attaching...we're thinking about this. I think we have a good plan; it won't be in the CTP, but once it's solid we'll make it public to gather feedback.

    schrepfler wrote:
    Can we generate and update the schema directly from the model?


    We aren't planning to include this functionality in the initial release of the Entity Framework. It's something we could do in a future version, or maybe the community will pick it up and build a nice tool Smiley


    Anyway, thanks for sending thoughts and questions, keep the feedback coming!

    -pablo