Coffeehouse Thread

17 posts

General Programmer Question: how much do you reuse code/classes?

  • Dr Herbie

    I've been reading around the 'Service Locator is an Anti-Pattern' meme, trying to figure out why this should be an anti-pattern. So the reason for its anti-pattern status is simply that it makes classes harder to re-use compared to Composition Root (because of having a dependency on the service locator).

    This made me realise that I haven't re-used classes since I was an academic. In my professional career, 95% of the code I write is business-specific and only ever gets used within the company I work for (where we always use the same underlying framework, so the Service Locator is already there).

    I work on long-standing LOB software projects, so it's not like I'm starting new projects very often.

    To what degree do you re-use classes, and for context, what sort of software do you normally write?

    Herbie

     

  • evildictaitor

    Really? I reuse classes all the time.

    Part of the way I achieve this, though, is that I write every single problem (e.g. disassembling a file, parsing HTML, or magicking a graphical widget into existence) as a library.

    A consequence of this is that all of my Applications are really tiny, but import a bajillion libraries and call the logic from each library. The Application is then almost entirely graphical stuff* (like this widget goes over here in this docking panel, next to this thing that does notifications) and glues user events like mouse clicks into calls into the respective libraries.

    (* by which I mean, it interacts with graphical controls defined in libraries)

    In my code, if logic that isn't strongly coupled to the Application (i.e. code that has any reuse value whatsoever) is going into the Application and not into a library, it automatically feels like I'm doing it wrong.

    The cool thing is, once you start programming like that, you quickly learn that there's no need to have a Service Locator, because the service locator is the app itself. All of the power and all of the functionality of your code gets hived off into different libraries, which (because C# assembly references can't be cyclic) have to be carefully designed to avoid cyclic dependencies, and hence end up being naturally decoupled. And the really cool thing is that next time you build a new application, it can quickly import the same libraries you've just built. Want that GUI component that you built on the other project? Import. Done. Want to disassemble this other type of file compiled for a different architecture? Import. Done.

    And when your other app wants more power out of your GUI component, you update it in the library; when you go back to your other project, your libraries have improved underneath the app, because it automatically inherits the code from wherever you're using it.

    I am seriously immensely glad that I decided to move to that model about 5 years ago. It has been a complete godsend to never have nasty coupling, and to always have APIs that were meant to be libraries to call, instead of just a huge mash of code to deal with.

     

    For context, I write all kinds of software, but recently I've written web servers, database engines, compilers, web pages, operating system boot loaders, UEFI device drivers, protocol parsers and fuzzers, as well as GUI components for when I get bored and want to render the results in a pretty way :)

  • Bas

    @evildictaitor: I'm not saying your way is wrong, nor am I under any illusion that I'm in any way qualified to judge your method as 'wrong', if there is such a thing, but that approach sets off two alarm bells in my head.

    The first one is complexity: an application that is basically just a place where you import a gajillion libraries sounds pretty complex to wrap your head around.

    Secondly, I'm a firm believer in the whole "You aren't going to need it" thing. I used to write everything, everything I was working on, as a reusable component/library, and I eventually realised that I spent a lot of my time making things reusable that I wouldn't ever reuse. Nowadays, I take the approach that I write it so that it works, and if I ever need something I know I've already written before, only then do I take what I wrote and make it reusable.

    For instance, if I need some sort of special value converter for a weird databinding, I just write the convert method with some rudimentary type checking and that's it. If I'm working on something else that needs the same value converter, only then do I take the one I originally built, stick it in a library, make sure it's robust enough to survive anything thrown at it, and implement the ConvertBack method. Maybe I won't need it a third time, but the fact that I needed it twice is a good indicator to me that it needs to be reusable. The fact that I needed it once isn't. Obviously I haven't kept any metrics on this, but it feels like I've saved lots of time with this approach.
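
    In code terms, the first pass is literally just the Convert method, something like this (a minimal WPF sketch; the converter name and the bool-to-Visibility mapping are only for illustration):

        using System;
        using System.Globalization;
        using System.Windows;
        using System.Windows.Data;

        // First pass: just enough for the one binding that needs it.
        public class BoolToVisibilityConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                // Rudimentary type check and nothing more.
                return value is bool && (bool)value ? Visibility.Visible : Visibility.Collapsed;
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                // Deliberately unimplemented until a second project needs it.
                throw new NotSupportedException();
            }
        }

    Only when a second project needs it does it move into a library, grow real input validation, and get a working ConvertBack.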

    For context: I used to write a lot of ASP.NET webapps and some WPF stuff, now I primarily work on relatively single-use WPF applications and some Windows Runtime stuff.

  • evildictaitor

    Bas wrote:

    @evildictaitor: I'm not saying your way is wrong, nor am I under any illusion that I'm in any way qualified to judge your method as 'wrong', if there is such a thing, but that approach sets off two alarm bells in my head.

    The first one is complexity: an application that is basically just a place where you import a gajillion libraries sounds pretty complex to wrap your head around.

    In theory, perhaps, but in practice the opposite is true. Since each library does exactly one thing, and does it well, you end up with less complexity. You also get the benefit of a library's internal classes being explicitly hidden from the application, since they have assembly scope.

    What it really gives you is that it forces all of your little silos of code (like functional units and components) into formal API contracts that deliberately make it hard for business logic to end up in functional code or vice versa. Under my scheme it's easier to build a good component that is easy to import (since that's how all components are normally used) than to have a sprawling inter-dependent mess, for the simple reason that you only see high-level exposed APIs that are designed to be easy to use and import, rather than designed to be coupled to the place you first invented them for.

    Good API design is the key to success in the industry; it decouples your code, increases your ability to refactor and improve it later, and reduces the ability for outside code to "hack" at your component's internals, which would otherwise increase the hurt when you change the component later.
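
    To make the assembly-scoping point concrete, a library skeleton might look something like this (a hypothetical sketch; HtmlTools and all of its types are invented):

        namespace HtmlTools
        {
            // The formal API surface: the only things importers ever see.
            public interface IHtmlParser
            {
                HtmlDocument Parse(string html);
            }

            public static class HtmlParsing
            {
                public static IHtmlParser CreateParser() { return new DefaultHtmlParser(); }
            }

            public sealed class HtmlDocument
            {
                public string Source { get; private set; }
                internal HtmlDocument(string source) { Source = source; }
            }

            // internal: invisible outside this assembly, so no application can
            // couple to the implementation, and it's free to change at any time.
            internal sealed class DefaultHtmlParser : IHtmlParser
            {
                public HtmlDocument Parse(string html) { return new HtmlDocument(html); }
            }
        }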

    Secondly, I'm a firm believer in the whole "You aren't going to need it" thing. I used to write everything, everything I was working on, as a reusable component/library, and I eventually realised that I spent a lot of my time making things reusable that I wouldn't ever reuse. Nowadays, I take the approach that I write it so that it works, and if I ever need something I know I've already written before, only then do I take what I wrote and make it reusable.

    I never write code that I don't have a need for right now. This isn't so much an "in principle" thing; it's just that why would I write code that I might need in the future when I'm so busy writing code that I do need right now? If I need that class, function or conditional branch later, I'll code it then.

    For instance, if I need some sort of special value converter for a weird databinding, I just write the convert method with some rudimentary type checking and that's it. If I'm working on something else that needs the same value converter, only then do I take the one I originally built, stick it in a library, make sure it's robust enough to survive anything thrown at it, and implement the ConvertBack method. Maybe I won't need it a third time, but the fact that I needed it twice is a good indicator to me that it needs to be reusable. The fact that I needed it once isn't. Obviously I haven't kept any metrics on this, but it feels like I've saved lots of time with this approach.

    As I said before, if I have code that is highly coupled (by design) and clearly has no use on its own, it goes in the application (or nearest the code that needs it).

    Example: Let's suppose I want to write a clone of Visual Studio.

    1. Create a WinForms app.

    2. On the designer, decide which of my UI controls I want, and import them as libraries.

    3. Drag and drop them onto the canvas.

    4. Set a couple of events, e.g. on menu handlers, key presses, mouse events or whatever is exposed by the UI elements.

    5. Let's link in a C# compiler, which is a library that imports a C# tokeniser and a C# linker.

    6. Pass the user data off to the library.

    Now let's suppose we want to add an automatic syntax-highlighting control. This is relatively coupled to the app, but it's likely to (a) be big, and (b) be something that someone else might want for a different app (e.g. syntax-highlighting VB in Excel), so we create that as a separate UI project. It imports the tokeniser base library, which has an ITokeniser, and which allows the application to pass it a C# tokeniser for syntax highlighting without coupling the syntax highlighter control to the C# tokeniser. It also keeps from cluttering the namespace of people who import it, because we can have a really small API footprint even if the code behind it is quite big, increasing the (re)usability of the library.

    Now what if you want to add an HTML tokeniser? Well, add the HTML tokeniser you built two years ago when parsing HTML. Shazam, magically it works. Note that it all works because everyone depends on as few other libraries as possible, with the exception of the top-level Application, which imports everything. That way, the C# tokeniser isn't tied to any UI stuff, the HtmlTokeniser isn't tied to the C# compiler, the DockingManager doesn't understand anything about your business logic, and the application is clean and tidy because it describes what the app does without delving into the low-level mechanics of how it works.
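
    As a rough sketch of how those pieces stay decoupled (ITokeniser is the contract mentioned above; Token, SyntaxHighlighter and the concrete tokenisers are invented names):

        using System.Collections.Generic;

        // Lives in the tokeniser base library.
        public interface ITokeniser
        {
            IEnumerable<Token> Tokenise(string source);
        }

        public class Token
        {
            public int Start;
            public int Length;
            public string Kind; // e.g. "keyword", "string", "comment"
        }

        // Lives in the syntax-highlighter UI library; knows nothing about C# or HTML.
        public class SyntaxHighlighter
        {
            private readonly ITokeniser _tokeniser;
            public SyntaxHighlighter(ITokeniser tokeniser) { _tokeniser = tokeniser; }

            public void Highlight(string source)
            {
                foreach (Token token in _tokeniser.Tokenise(source))
                {
                    // Map token.Kind to a colour and paint that span (omitted).
                }
            }
        }

        // Only the application couples the two:
        //   var csharp = new SyntaxHighlighter(new CSharpTokeniser());
        //   var html   = new SyntaxHighlighter(new HtmlTokeniser());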

    By tying your code together as libraries, you get crazy productivity boosts when writing applications, because your code automatically gets written to be imported and reused, and because of the assembly separation of libraries it's deliberately hard for someone to rely on how the controls/functional units work internally; they have to use the APIs, allowing greater decoupling of components, faster switching of implementations for better ones, and generally higher reuse of code. It also forces you to not think about UIs or business logic when you're in functional code, to not think about functional code or business logic when you're designing UIs, and to not think about UIs or functional code when writing business logic.

    Maybe I'm not explaining it very well. But seriously: I credit that decision as one of the key reasons for the success of some of the programs I've written, and I would never turn back. It's a bit of a pain to start off with, perhaps, but after a small number of libraries you'll find that you're easily making up the time wasted right-clicking "New C# library" in a VS solution.

  • Dr Herbie

    @evildictaitor: I suspect that the context is important here -- you seem to work on lots of smaller projects, while I have worked in the same large LOB system for the last 7 years. We only really use IoC to allow us to retrofit unit tests onto some of the core brownfield code; 99% of our code will never see the inside of another project, and those bits that might are at the very bottom of the dependency tree and therefore don't rely on anything else.

    Herbie

  • wkempf

    The problem with SL is that it hides the dependencies. From the outside, the only thing we know the class depends on is the SL container. This makes using the class nearly impossible without actually reading the implementation code to know what it really depends on.

    Contrast this with DI, where the dependencies are clearly visible from the outside (generally, they are constructor parameters). You know exactly what is necessary to use the class.
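
    To illustrate the contrast (a minimal sketch; the locator and the order types are invented):

        using System;
        using System.Collections.Generic;

        public interface IOrderRepository { void MarkShipped(int orderId); }

        // Stand-in for whatever container/locator the app registers things with.
        public static class ServiceLocator
        {
            private static readonly Dictionary<Type, object> Services = new Dictionary<Type, object>();
            public static void Register<T>(T service) { Services[typeof(T)] = service; }
            public static T Resolve<T>() { return (T)Services[typeof(T)]; }
        }

        // Service Locator style: the signature admits only a dependency on the
        // locator; you must read the body to learn it needs IOrderRepository.
        public class OrderServiceWithLocator
        {
            public void Ship(int orderId)
            {
                var repo = ServiceLocator.Resolve<IOrderRepository>(); // hidden dependency
                repo.MarkShipped(orderId);
            }
        }

        // Constructor injection: the dependency is part of the public contract,
        // so callers (and tests) know exactly what the class needs.
        public class OrderService
        {
            private readonly IOrderRepository _repo;
            public OrderService(IOrderRepository repo) { _repo = repo; }

            public void Ship(int orderId) { _repo.MarkShipped(orderId); }
        }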

    I'm not necessarily sold on Composition Root. This basically says you create the ENTIRE object graph upfront. I don't think that makes a whole lot of sense. Just as an example, in any GUI application you're going to be creating and destroying Windows/Dialogs/Pages/etc. quite frequently. Every one of those objects is going to need to be injected with services as well. This means Composition Root isn't going to work very well here. That's just one example... there are hundreds more.

    As for reuse... I reuse a lot of code. Even in LOB applications there's still going to be a lot of room for reuse. Even if 95% of the code is business logic (I find that surprising) there's still going to be a LOT of code that can be reused.

  • Bass

    I don't go to special lengths to make my code reusable per se, but just by following the standard practices (good naming, do one thing well, etc.), it isn't very difficult to reuse something when it's needed, provided your code is clean. Basically, the same things that make your code easy to understand and testable also help it be reusable. IMO, of course. I used DI a lot when I did Java work; it's a big thing in that community.

    One thing I do use is a metric ton of open source code in all my projects, but I'm not sure if that's what you mean by code reuse.

  • magicalclick

    @Dr Herbie: I reuse the public methods from the existing system a lot. Well, it's the only way, because I have no idea how to do those magic calculations. I copy and paste a lot from other methods to make similar method calls. I don't make my own classes, I don't include my own libraries, and I don't include 3rd-party libraries. The chances are, there is already something in the existing system that I should use.

  • MasterPi

    Parts of my code become reusable when I need to test those segments rigorously. At that point, I can't assume anything about the input or business flow, so it ends up being self-contained. But just because I make it reusable doesn't necessarily mean that it actually gets reused. About the only thing I ever reused was a .NET middleware library for my C++ facial expression recognition engine. I almost always end up building on top of existing frameworks, so the code I actually write is application/business specific.

    In any case, as a general practice, I'd go for a cross between Herbie's and Bas's approaches: if it's obvious that you can factor it out into something that can sort of stand on its own, then do it. Otherwise, make it reusable on an as-needed basis.

  • evildictaitor

    Dr Herbie wrote:

    @evildictaitor: I suspect that the context is important here -- you seem to work on lots of smaller projects

    Not really. Six of the projects I'm a major contributor to have more than a million lines of code. Part of the reason for the design choice to move to 'everything is a library' is to keep IntelliSense from getting cluttered by classes and functions that aren't formal APIs into the code, massively increasing code findability.

    Actually, one of the reasons I insisted we went down this path was that I saw a huge push during the Vista timeframe at Microsoft to move their code to a fixed dependency model, and this was one of the ways one of the teams tried to solve it (for C++, not C#). That codebase was Windows, so I'd certainly not say that carefully managing dependencies and scopes by forcing stuff into libraries can only work for small projects. I tried it myself for a short while, and it was so useful for my own projects that we trialled it at work, where it was a massive success.

    Anyway, this isn't supposed to be advice; it's just a direct response to the question "how often do you reuse your code?". My answer is often.

  • OrigamiCar

    @Dr Herbie:

    Hi Herbie,

    I think the type of projects you work on can have more than a small influence on how reusable your code is.

    For example, at my company I run the development teams, and we're a large e-commerce operation. So we have our main e-commerce sites, additional micro-ecommerce sites, internal merchandising/reporting apps, etc., our main back-end order processing apps, as well as web APIs for additional vendor apps and other things, all of which talk to each other to a greater or lesser extent.

    So, we have built layers of reusability across a lot of our software. Essentially we've thrown the most common things we do into a set of internal APIs that are fully documented.

    At the basic level we have things like simple helper classes that wrap up the most commonly used areas of ADO.NET, caching of data, putting things into Azure blob storage, etc. The idea there is that our developers can connect to one of our databases, call a stored procedure, and get back a dataset with a single line of code; as long as that particular API is used, rather than doing it all manually through ADO.NET, we have consistency. We know that there's going to be try... catch... finally around the actual ADO.NET work, that the correct data connection is used, etc.
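
    A rough sketch of what one of those helpers looks like in spirit (the names and signature here are invented):

        using System.Data;
        using System.Data.SqlClient;

        public static class Db
        {
            // One call runs a stored procedure and returns a DataSet, with the
            // connection handling and cleanup kept consistent in one place.
            public static DataSet ExecProc(string connectionString, string procName,
                                           params SqlParameter[] parameters)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(procName, connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddRange(parameters);

                    var dataSet = new DataSet();
                    using (var adapter = new SqlDataAdapter(command))
                    {
                        adapter.Fill(dataSet); // Fill opens and closes the connection itself
                    }
                    return dataSet;
                }
            }
        }

        // The single line the developer writes:
        //   var ds = Db.ExecProc(connStr, "dbo.GetOrders", new SqlParameter("@CustomerId", 42));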

    On top of that we have an additional layer for getting/putting data - so if the developer needs to get XX products in a group, or do a keyword search for YYY items, then he/she calls the data request layer. This layer worries about whether the data is already cached; if not, it knows where to get the data (database/blob storage/table storage/etc.), puts it into the cache for the correct amount of time, and returns it to the developer as the correct collection type expected, not just a dataset.
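
    In other words, it's essentially the cache-aside pattern; a minimal sketch (Product and the storage call are invented for illustration):

        using System;
        using System.Collections.Generic;
        using System.Runtime.Caching;

        public class Product { public int Id; public string Name; }

        public class ProductDataLayer
        {
            private readonly MemoryCache _cache = MemoryCache.Default;

            public IList<Product> GetProductsInGroup(int groupId)
            {
                string key = "products:group:" + groupId;

                var cached = _cache.Get(key) as IList<Product>;
                if (cached != null) return cached; // cache hit: no storage trip

                IList<Product> products = LoadFromStore(groupId); // database/blob/table storage
                _cache.Set(key, products, DateTimeOffset.Now.AddMinutes(10)); // TTL per data type
                return products;
            }

            private IList<Product> LoadFromStore(int groupId)
            {
                // Stand-in for the real storage lookup.
                return new List<Product>();
            }
        }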

    All of this allows us to make sure that we have standard, consistent ways of doing our most common things - so the end developer doesn't have to worry as much about the runtime performance of getting their data: if there's a method in our API for getting what they need, then we've already done the performance work on it. It's more work up front, but it covers around 75% of what our developers need to do, so it's been worth it, and it allows them to focus on the other 25% and give more attention to Win UI/web design work.

    As for my own personal projects - I have a fair number of reusable classes I dip into as well for very common things I do a lot. Once again, data access and caching are covered there, along with things like editing images, putting things into Azure blob storage, and so on. I have a few websites, mobile apps and other things I work on in my free time, so code reuse makes sense for me. If I were just working on one large project, though, I probably wouldn't have so much reuse in there.

    Richard.

  • Richard.Hein

    @Dr Herbie: True code reuse is traditionally very difficult to maintain in certain types of code, like LOB applications and e-commerce sites, where the code is often copied, pasted and customized, rather than refactored into truly reusable and composable frameworks and libraries. It's often a matter of time, and the ROI of spending time refactoring is something that many, many managers and stakeholders are unwilling to invest in, just as they are unwilling to invest in dealing with technical debt in general (and this is a form of technical debt). So you end up with 100s of gigs of similar but incompatible types, functions and applications as a whole.

    One of the ways to really have true reuse is to write code as pure functions. Because they are pure, they can be trusted to always do the same thing. But even if you do that, the types the functions operate on are not necessarily reusable in LOB and customized solutions built over generalized templates, because they have customer-specific fields and whatnot. However, as we move forward to things like dependent types, I hope to see solutions to that problem as well. Hope is a key word here.
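
    For example (a trivial sketch with invented names):

        public static class Pricing
        {
            // Pure: the result depends only on the arguments, so this function
            // can be dropped into any project and trusted to behave identically.
            public static decimal ApplyDiscount(decimal price, decimal rate)
            {
                return price - price * rate;
            }

            // Impure: depends on hidden, mutable state; reusing it drags that
            // state (and its lifecycle) along with it.
            public static decimal CurrentRate = 0.10m;
            public static decimal ApplyCurrentDiscount(decimal price)
            {
                return price - price * CurrentRate;
            }
        }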

    Good thread.  I'd like to dig into it more later if I have time.

  • bondsbw

    Dr Herbie wrote:

    So the reason for its anti-pattern status is simply that it makes classes harder to re-use compared to Composition Root (because of having a dependency on the service locator).

    This really isn't what I took away from it. Service Locator is an anti-pattern that particularly affects larger systems developed by multiple developers.

    Deep in the object graph created by the IoC container, some developer will have a dependency that is not explicit. They will need to instantiate an IFoo. Now suppose another developer no longer needs his IFoo and removes the code that adds it to the container (not realizing the deep dependency). Whenever that deep code runs, instantiation of IFoo fails, and the result is a bug.

    This is particularly problematic in deep code rarely covered by manual testing.  Such a bug could easily make it into production.

    The solution is to use typed factories. In Castle Windsor, I used the Typed Factory Facility, which allows me to inject an IFactory&lt;T&gt; to resolve any T. This makes the dependency explicit. Then, I wrote a custom resolver that checks whether that dependency is resolvable before injecting the IFactory&lt;T&gt;. Because this is a deep check (ensuring the first T can be resolved, then any IFactory&lt;T&gt; that is injected into it, then the factories injected into those, etc.), no matter how large the dependency graph, I get an immediate error if something deep down cannot resolve.
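
    The shape of it is roughly this (a sketch, so treat the registration details as approximate; IFoo is invented and my deep-check resolver is omitted):

        using Castle.Facilities.TypedFactory;
        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        // The factory contract consuming code depends on; Windsor's typed
        // factory facility supplies the implementation at resolve time.
        public interface IFactory<T>
        {
            T Create();
            void Release(T instance);
        }

        public interface IFoo { }
        public class Foo : IFoo { }

        // The late-created dependency is now explicit in the constructor.
        public class NeedsFooLater
        {
            private readonly IFactory<IFoo> _fooFactory;
            public NeedsFooLater(IFactory<IFoo> fooFactory) { _fooFactory = fooFactory; }

            public void DoWork()
            {
                var foo = _fooFactory.Create(); // resolved on demand, not at construction
                // ... use foo ...
                _fooFactory.Release(foo);
            }
        }

        // Composition root:
        //   var container = new WindsorContainer();
        //   container.AddFacility<TypedFactoryFacility>();
        //   container.Register(
        //       Component.For(typeof(IFactory<>)).AsFactory(),
        //       Component.For<IFoo>().ImplementedBy<Foo>(),
        //       Component.For<NeedsFooLater>());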

    (And recall that regular DI gives you all of this for anything you can resolve directly in the constructor. All that service locators, or typed factories, give you is the case where you must resolve something after the object has been constructed.)

  • Charles

    It's also the best way to write portable code. What you're asking, though, is not a question of the technical merits of this pattern. Rather, it's a question about who's actively behaving in this way, day to day. I hope more and more folks start to do this right off the bat in their application projects so they can more easily port their applications to different application platforms.

    Code locally. Design globally.
    C

     

  • Dr Herbie

    @bondsbw: Like portability, multi-developer conflicts are not something I've ever had a problem with (there are only 6 developers in my company, but we often all work on the same project at the same time). We've never had an instance of a developer removing a class from IoC without checking first that the interface is not used anywhere else -- that would be poor developer practice in my book, regardless of IoC technique.

     

    @Charles: I keep hearing people say this kind of thing, but in my context it is not a major concern -- I'm writing LOB applications for a vertical market whose customers will buy the hardware to suit the software, so portability is not an issue.

    For example, our code is written against SQL Server, so that is what our customers buy to run our system. If MS were to kill off SQL Server, we would be stuffed, because we use hand-coded, optimised stored procedures a lot for complex data-crunching. If we were to make that code portable to any database system, we would lose the SQL Server-specific optimisations and we would have a slow, unusable system.

    As long as WinForms is around, we don't have to change anything. When we have to move to a new UI technology, we just need to rewrite the UI elements using the same controllers; IoC and decoupling don't make a major impact on this, as it's a 1-to-1 match between UI and controller. If C# and .NET were to die, then no amount of portability would help.

    I think I'm on a down-swing with architecture -- yes, it's important to have a deep understanding of all the principles and techniques, but it's also important to understand when they are overkill and would just waste time and add complexity for little or no real benefit. I guess it's architectural YAGNI.

     

    Herbie

  • bondsbw

    Dr Herbie wrote:

    We've never had an instance of a developer removing a class from IoC without checking first that the interface is not used anywhere else -- that would be poor developer practice in my book, regardless of IoC technique.

    Agreed. To justify it in my case: we are building an extensible application that uses IoC to control the extensibility (a la MEF). The teams may not work together, may not work in the same company, and may not even be working with the same version of the main component. It's even possible for extensions to take a dependency on each other.

  • Charles

    @Dr Herbie: Good points. I wasn't suggesting that you should always design portable systems. Sometimes it just doesn't make sense (as in your case). Further, I'm really talking about user apps, not line-of-business applications that are necessarily platform-specific.

    C
