Coffeehouse Thread

15 posts

LINQ: The end of n-tier?

  • W3bbo

    After using LINQ (and LINQ to SQL) for a few weeks now (and about 1/5th of the way through LINQ in Action), I've noticed it's made my application designs a lot simpler. Too simple, in fact. Simple enough to warrant removing any kind of data access layer and just using LINQ directly from the front end.

    It seems like a foolish decision, but besides the occasional bit of shared biz logic (which can go in the *DataContext partial class anyway), are there any good arguments against this? One of the things I like is how I can use anonymous types directly and reduce DB hits for data I don't need, something that would be nearly impossible (or just painful) with a traditional 3-tier approach.

    Then again, with LINQ to SQL, are my Context classes my new data tier?
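    For concreteness, the column-trimming described above looks roughly like this. A LINQ-to-Objects sketch with illustrative names; against a LINQ to SQL table the same query shape is translated into a SELECT of just those two columns rather than the whole row:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Order
{
    public int ID;
    public string CustomerID;
    public DateTime OrderDate;
    public string Notes;        // a column the page does not need
}

class ProjectionDemo
{
    static void Main()
    {
        var orders = new List<Order>
        {
            new Order { ID = 1, CustomerID = "ALFKI", OrderDate = new DateTime(2008, 5, 1), Notes = "long text" },
            new Order { ID = 2, CustomerID = "BONAP", OrderDate = new DateTime(2008, 5, 2), Notes = "long text" }
        };

        // Project into an anonymous type holding only the fields the UI needs.
        // Against LINQ to SQL this generates SELECT [ID], [OrderDate] ...
        // instead of selecting every column.
        var summaries = from o in orders
                        where o.CustomerID == "ALFKI"
                        select new { o.ID, o.OrderDate };

        foreach (var s in summaries)
            Console.WriteLine("{0}: {1:yyyy-MM-dd}", s.ID, s.OrderDate);
    }
}
```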

  • Frank Hileman

    Even without LINQ, that extra tier is often overkill for small projects. I've noticed people focus so much on providing OO containers for data that they forget about general-purpose approaches based on weak typing (LINQ-like techniques are old but under-used). If your data is fundamentally "dumb", or a read-only view, that extra tier may be unnecessary code.

    If we can get rid of relational databases, there is even less need for wrapper layers. Look up "semantic data modeling".

  • figuerres

    Like any tool, it can be great or misused.

    I personally love LINQ and DLINQ for what they are.  I do not see them as an ORM nor as a total solution... but as a tool.

    I like, for example, that I can build a tree of objects in memory and then do a .SubmitChanges() and know that it committed all the data as a transaction, without me having to mess about with declaring the SQL transaction and writing code to make sure I got the right ID from a parent record to then copy into each child to keep the relation right....

    I hated in the "old days" having to create a class just to hold some data while my code was working on something....
    I like being able to say:
     Order O = db.Orders.Single(x => x.ID == iIDNumber);

    and then test O.whatever until I am done and my using (dataContext) { } goes out of scope.

    very handy stuff... not the cure-all for everything, but a nice power tool to let me get from A to B faster.
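    The parent/child insert described above, sketched against a hypothetical designer-generated OrdersDataContext (OrdersDataContext, Order, and OrderDetail are all illustrative names, not from the thread):

```csharp
// Hypothetical designer-generated types: OrdersDataContext, Order, OrderDetail.
using (var db = new OrdersDataContext())
{
    var order = new Order { CustomerID = "ALFKI", OrderDate = DateTime.Now };

    // Children attached in memory; no IDs assigned yet.
    order.OrderDetails.Add(new OrderDetail { ProductID = 1, Quantity = 3 });
    order.OrderDetails.Add(new OrderDetail { ProductID = 7, Quantity = 1 });

    db.Orders.InsertOnSubmit(order);

    // One call: LINQ to SQL opens a transaction, inserts the parent, reads
    // back its identity value, propagates it into each child's foreign key,
    // inserts the children, then commits -- or rolls everything back.
    db.SubmitChanges();
}
```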




  • funkyonex

    What happens when you want to access data from a windows client over an unreliable network like the internet? What about interfacing with other software systems?

    Loosely coupling your data access layer not only helps with maintenance (and I agree with you that this is getting to be much less of an issue), but you're forgetting the real reasons for a tiered approach -- physical separation to achieve better scalability (not better performance, better scalability) and easier interfacing across applications. A stateless middle/data tier is going to fare much better when thousands of clients are calling it. Farming out cheap middle-tier servers or using queued components is better than a nasty database upgrade. And regardless of whether you're using SOA or not, if you need to interface with other systems then some sort of backend service is usually necessary, because your clients aren't going to have direct access to all those data stores. Even if they did, do you want to figure out the schema for all of them and what queries you need to write? It's much easier to figure out service contracts (= what method to call, what do I pass it, what does it return).

    That said, I agree that not everyone needs that level of scalability or interoperability. But sometimes you don't know how your application will be used 10 years down the road. I'm not advocating over-architecting a simple application, but following a layered approach up front (I'm talking just simple logical separation) may save you a lot of pain down the road, because you can add services later much more easily.

    As always, your mileage may vary.

    And relational databases are not going away even if everyone stopped selling them. Software these days needs to interface with multiple data sources and most of them are already humming along running the business with a relational back end. Don't you wish all of them had services you could just call? Don't make your application a silo unless you know for sure it will always be a silo.

    Cheers,
    -Beth
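    A contract in the sense described above can be as small as this sketch. All names are hypothetical; in 2008 terms the shape might surface as asmx [WebMethod]s or a WCF [ServiceContract], but the point is the shape, not the plumbing:

```csharp
using System;

// Hypothetical contract: what method to call, what goes in, what comes out.
public class OrderSummary            // plain class that crosses the wire
{
    public int OrderID;
    public DateTime OrderDate;
    public decimal Total;
}

public interface IOrderService
{
    OrderSummary[] GetOpenOrders(string customerId);   // in: a customer id; out: summaries
    int SubmitOrder(OrderSummary order);               // out: the new order's id
}
```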

  • figuerres

    funkyonex said:

    *snip*

    Good Points and well said Beth!

    one example:

    I have a client app that connects with asmx web services to my datacenter; in fact I have several that work that way...

    so the contract between the service and the client just has classes that hold data.
    inside the web service I use DLINQ to do stuff, but anything that goes to the client or comes from the client has no traces of LINQ.

    and I have CE handhelds and Windows desktops that share the same set of web services.

    now as I am updating the client apps they are starting to use LINQ locally to work with the local objects... but again, when the data goes out it's plain old classes that hold data.
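    That boundary discipline -- LINQ inside the service, plain classes on the wire -- can be sketched like this (all type names illustrative; OrderEntity stands in for a designer-generated LINQ to SQL class):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Plain data-holder class that crosses the service boundary; no LINQ to SQL
// attributes, EntitySets, or DataContext references leak to the client.
public class OrderDto
{
    public int ID;
    public decimal Total;
}

// Stand-in for a server-side, designer-generated entity (illustrative).
public class OrderEntity
{
    public int ID;
    public decimal Total;
}

public static class Boundary
{
    // Inside the web method: query with LINQ on the server, then project
    // into the DTO so the wire format stays "classes that hold data".
    public static OrderDto[] ToPayload(IEnumerable<OrderEntity> entities)
    {
        return entities
            .Select(e => new OrderDto { ID = e.ID, Total = e.Total })
            .ToArray();
    }
}

class BoundaryDemo
{
    static void Main()
    {
        var payload = Boundary.ToPayload(new[]
        {
            new OrderEntity { ID = 1, Total = 10m },
            new OrderEntity { ID = 2, Total = 25m }
        });
        Console.WriteLine(payload.Length); // 2
    }
}
```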


  • funkyonex

    figuerres said:
    *snip*


    Yep, that's a great solution.

    Or if you're just looking for a simple architecture that provides the scalability you need, you could set up n-tier datasets (http://msdn.microsoft.com/en-us/vbasic/cc138242.aspx), then use LINQ to DataSets on the client (http://msdn.microsoft.com/en-us/vbasic/bb737877.aspx), and use Sync Services for caching (http://msdn.microsoft.com/en-us/vbasic/cc307991.aspx).

    Of course this can solve the scalability/occasionally-connected issues, but it isn't SOA, so you'd still probably want to define some contracts for interoperability if you need it.

    Cheers,
    -Beth
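    A minimal LINQ to DataSets sketch, assuming a DataTable already filled on the client (an untyped table with illustrative column names, for brevity; a typed dataset from the n-tier designer works the same way):

```csharp
using System;
using System.Data;
using System.Linq;

class LinqToDataSetDemo
{
    static void Main()
    {
        // An ordinary DataTable, such as an n-tier dataset's TableAdapter
        // would fill on the client.
        var orders = new DataTable("Orders");
        orders.Columns.Add("ID", typeof(int));
        orders.Columns.Add("Total", typeof(decimal));
        orders.Rows.Add(1, 10m);
        orders.Rows.Add(2, 250m);

        // LINQ to DataSets: AsEnumerable() + Field<T>() give the same query
        // syntax over rows already cached locally -- no extra DB hit.
        var big = from r in orders.AsEnumerable()
                  where r.Field<decimal>("Total") > 100m
                  select r.Field<int>("ID");

        foreach (var id in big)
            Console.WriteLine(id); // prints 2
    }
}
```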

  • figuerres

    funkyonex said:
    *snip*
    Sync is interesting, also the Mesh stuff; waiting to hear more about Mesh. Are they separate, or does Mesh use Sync?

    on the handheld / CE side I use http://gotcf.net/ CF.Serialization for binary serialization,
    and generics + LINQ for the data-manipulation stuff.  the amount of data held locally is small and not very complex.

    on the desktop app only the current work and one list are kept local; they are always connected and the data is time-sensitive.

    our main customer's system was built in 2005 in about 4 months. our main SQL DB is about 30 gigs; old data gets pulled to a backup about 2 times a year.  we have about 50 PCs and about 30 handhelds.

    the servers are in a data center and the PCs and handhelds are on cable modems, so letting the clients connect to the server over HTTPS works very well.

    I wrote almost the whole system; a few folks have done bits when I was too busy.
    the core is all my work.

    I started using .NET and Reporting Services when they first came out. I also contributed to the PAG group work that led to ClickOnce.

  • vesuvius

    I think Beth did remarkably well not to link to the LINQ articles on her blog about Datasets vs. Objects. Channel 9 link editor not working yet again! http://blogs.msdn.com/bethmassi/archive/2008/04/12/linq-to-sql-n-tier-smart-client.aspx

    I started a project that I expect to scale. I also want to upgrade it to WPF and Silverlight so more than one customer can use it. I started the project with LINQ to SQL and ended up with a project that was laden with hundreds of objects for no real return on investment. Try databinding to anonymous types and you soon find yourself in murky waters. I ended up going back to datasets and I've not had a single problem, because the technology is mature and most third-party components (reporting, charting, gauges) are made for datasets. The project is considerably smaller because of the reduced object count. It is also far easier to understand when looking at it for the first time, because of the structure.

    The chief advantage performance-wise is the disconnected nature of datasets, and the fact that you can hook up a SQL Compact database on the local machine and sync it to the service layer. Talk about reducing database hits; LINQ is nowhere near this level of maturity, and Anders and crew are still ruminating on C# 4.0. I've tried the ADO.NET Entity Framework and that is really where you want to concentrate your efforts, because the ORM is practically identical to LINQ to SQL but far better. At least you have associations between tables in the ORM that you can program against, and it is a framework, whereas LINQ to SQL is just a 'posh' SQL adapter with an ORM.

    The long and the short of it is that these technologies are relatively new, no formal patterns and practices have been identified and written about, and none of them solves every single ADO.NET problem. The trick is to identify your specific requirement and use the right ADO.NET component, but usually only experience gives you this.
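    One hedge against the anonymous-type databinding problem mentioned above: anonymous types are compiler-generated, internal, and read-only, which is what makes grid binding to them murky. Projecting into a small named class keeps the trimmed query but gives the binding layer a real public type with settable properties. A sketch with illustrative names:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A small named projection type: public, with settable properties, so
// two-way databinding works where an anonymous type would not.
public class OrderRow
{
    public int ID { get; set; }
    public decimal Total { get; set; }
}

// Stand-in for the source entity (illustrative).
public class SourceOrder
{
    public int ID;
    public decimal Total;
    public string Unbound;   // a field the grid never shows
}

public static class Projection
{
    public static List<OrderRow> ToBindable(IEnumerable<SourceOrder> orders)
    {
        // instead of: select new { o.ID, o.Total }
        return orders.Select(o => new OrderRow { ID = o.ID, Total = o.Total }).ToList();
    }
}

class ProjectionDemo
{
    static void Main()
    {
        var rows = Projection.ToBindable(new[]
        {
            new SourceOrder { ID = 1, Total = 10m },
            new SourceOrder { ID = 2, Total = 20m }
        });
        Console.WriteLine(rows.Count); // 2
    }
}
```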

  • Maddus Mattus

    vesuvius said:
    *snip*

    Now this is a proper thread ;)

    Here's an idea: why not replace the code-generated classes with a typed dataset?

    That would solve a lot of my problems!

  • figuerres

    vesuvius said:
    *snip*
    vesuvius:  you may note that in my posts I am using LINQ only for specific tasks, not trying to build a whole system based on it.
    as I have said many times: it's a tool, and it can be misused or over-used like any tool.  I am not trying to say you are wrong for using it, and I am not saying that your points about it being new etc. are 100% wrong.

    what I do say is that careful use in some places is OK.  I can pull it out if I find a problem in the places I have it, but so far the way I have used it is working 100% for me in my apps.  YMMV is a given.

    I will, if I may, borrow a bit of what you said, but my version has a small edit:

    "The trick is to identify your specific requirement, and use the right components and technologies, but usually only experience gives you this."

    that is indeed the key: knowing when to stop, when to stick with the known and tested, when to try a new thing, and how to back out if it does not work before you lose your shirt.  that's the stuff the new folks do not have, and the ones who have done this for a while learn it if they are good at it.

  • figuerres

    Maddus Mattus said:
    *snip*

    Possibly some kind of "adapter" class that can transform type X to type Y and back again?
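    A minimal sketch of such an adapter (all type names hypothetical; OrderEntity stands in for a LINQ-generated class, OrderRow for a typed-dataset-style row or DTO):

```csharp
using System;

// Stand-in for a LINQ to SQL generated entity (illustrative).
public class OrderEntity { public int ID; public decimal Total; }

// Stand-in for a typed-dataset-style row / plain DTO (illustrative).
public class OrderRow { public int ID; public decimal Total; }

// One class that knows how to copy type X to type Y and back.
public static class OrderAdapter
{
    public static OrderRow ToRow(OrderEntity e)
    {
        return new OrderRow { ID = e.ID, Total = e.Total };
    }

    public static OrderEntity ToEntity(OrderRow r)
    {
        return new OrderEntity { ID = r.ID, Total = r.Total };
    }
}

class AdapterDemo
{
    static void Main()
    {
        // Round-trip: entity -> row -> entity, no fields lost.
        var back = OrderAdapter.ToEntity(OrderAdapter.ToRow(
            new OrderEntity { ID = 42, Total = 9.95m }));
        Console.WriteLine(back.ID); // 42
    }
}
```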

  • vesuvius

    figuerres said:
    *snip*

    I agree with you completely. I am probably giving the wrong impression of a one-or-the-other type of choice; in truth you can and should use both. Type inference is such a 'handy thing', and if any of you have tried Beth's WPF videos, she tends to 'sprinkle' the examples with LINQ when it saves time or makes sense to do so. I for one can't wait until further updates have been made to both LINQ and the ADO.NET Entity Framework, because it is an elegant way to deal with data.

    A personal thing I have found, though, is that when you use the so-called lower-level APIs, be it datasets or XmlTextReaders and Writers, you have a slightly higher degree of control over what is happening. This is not always needed, which is why higher abstractions are created, but knowing a technology that is a little more 'bare bones' and less 'fluff' does always have its advantages.
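    As a small illustration of that 'bare bones' control: with XmlTextWriter every element and attribute is an explicit call, rather than something a serializer or a dataset's schema decides for you.

```csharp
using System;
using System.IO;
using System.Xml;

class XmlControlDemo
{
    static void Main()
    {
        var sw = new StringWriter();
        using (var writer = new XmlTextWriter(sw))
        {
            // Every element and attribute is an explicit decision.
            writer.WriteStartElement("order");
            writer.WriteAttributeString("id", "42");
            writer.WriteElementString("total", "9.95");
            writer.WriteEndElement();
        }
        Console.WriteLine(sw.ToString()); // <order id="42"><total>9.95</total></order>
    }
}
```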



  • figuerres

    vesuvius said:
    *snip*

    Yeah, right now I am looking at "tuning up" a bit of code in my handheld app.

    works fine when the user inserts a few items in an order.

    but when the number of lines grows, perf lags.

    this is a re-write of an app, and now that a bunch of new code is working I can focus on perf issues.

    the first version works.

    I think the problem is when it updates the UI on the CE device; I may have to cast to an interface so I can call a method that is hidden.

    the upshot is that today it does a Controls.Clear() and then a loop that does .Add on a panel that makes a "grid-like" list of items.
    what I really need to test is not using the .Clear() with the loop, and instead casting to IList and calling .Insert(index, control).
    that saves a possibly huge update of the UI.

    NOTE: when first coded, that list was static; then they wanted to add items; now that it works they are adding a *LOT* of items. So the needs have changed: the first design was OK for the expected use, but the use has changed.

    I could have tossed the panel control or written my own collection of child controls, but by knowing to check what interfaces the ControlCollection implements, I know I should be able to use a cast to get a method call that saves me all that work.
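    The change being described, sketched as fragments (Compact Framework WinForms; panel, orderLines, newLine, index, and MakeRowControl are illustrative stand-ins, and whether ControlCollection's IList.Insert is supported on the device is exactly what the post says still needs testing):

```csharp
// Today: wipe and rebuild the whole "grid", forcing a full relayout
// of every row on each change.
panel.Controls.Clear();
foreach (var line in orderLines)
    panel.Controls.Add(MakeRowControl(line));

// The experiment: leave existing rows alone and add only the new one.
// ControlCollection exposes no public Insert, hence the cast to its
// IList implementation -- worth verifying, since some collections
// reject IList.Insert at runtime.
((System.Collections.IList)panel.Controls).Insert(index, MakeRowControl(newLine));
```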

  • Red5

    I have very tiny exposure to LINQ, but it seems like a possible candidate to use for validation:
    where you are trying to save a record and you need to compare your record's data against the bulk of the data in the database.
    Any thoughts on that kind of usage?
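    One way that could look is an existence check before saving. With LINQ to SQL an Any() call like this translates to an EXISTS query, so the comparison happens in the database rather than pulling the bulk data down; the in-memory list below is an illustrative stand-in for the table, and all names are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Customer
{
    public int ID;
    public string Email;
}

static class Validation
{
    // "Is this value already taken by some other record?" -- excludeId
    // keeps the record being saved from colliding with itself.
    public static bool IsEmailTaken(IEnumerable<Customer> customers, string email, int excludeId)
    {
        return customers.Any(c => c.Email == email && c.ID != excludeId);
    }
}

class ValidationDemo
{
    static void Main()
    {
        var existing = new List<Customer>
        {
            new Customer { ID = 1, Email = "a@example.com" },
            new Customer { ID = 2, Email = "b@example.com" }
        };

        Console.WriteLine(Validation.IsEmailTaken(existing, "a@example.com", 2)); // True: taken by record 1
        Console.WriteLine(Validation.IsEmailTaken(existing, "a@example.com", 1)); // False: it's the record being saved
    }
}
```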

  • staceyw

    It's a question that LINQ makes you think about, for sure. For me, the answer comes down to: is this a single app or a platform? If you want a platform that can be "surfaced" with web, rich clients, console clients, POX, RSS, REST, etc., then you want your biz-logic-layer API in the middle and surface that API in various ways -- but they all use the same biz logic layer, with one place to change it. That is the primary value, imo.

    Nirvana for me would be to finish Volta, where you don't care about DTOs. The platform abstracts that whole issue; you just tier-split your client and server as needed with attributes, and the biz logic layer could have LINQ queries to the backend or something else. The backend could cache and distribute load automatically. The code would be correct by construction (and verified) to make it safe for stateless distribution on the backend as needed. Deployment is as simple as pushing a button, and testing works as normal. You could still use LINQ on the client -- not for querying the DB directly, but for munging returned collections. APIs on the server tier could be surfaced to non-Volta clients as POX or REST via declarative config.

    Mesh, on the other hand, adds an interesting twist here. It, by design, forces a separation between layers, because you program against local data that has been synced. This is a nice model because you program locally with all the comm abstracted, and can use the kind of object model you want (or don't want) with the data. Some interesting times we are in.

Comments closed
